We’ve all seen movies about artificial intelligence getting smarter than us and overthrowing our stronghold to make us obsolete, but how close are we to total subordination, and is the fear of losing jobs to robots as real as we are led to believe?
In this informative and witty talk, Rick Nucci – CEO and co-founder of Guru (a company built on empowering others to do their jobs) – shows that there are many factors to consider before we throw in the towel on humanity, and that every big innovation before this one was also hyped as the thing to fear.
Rick is currently the chair of one of the most active entrepreneur groups in Philadelphia. He is also an active blogger and a frequent speaker at industry events about startups, SaaS, and cloud computing.
Hello. Good afternoon. My name’s Rick. Great to see everyone.
I didn't mean to advance that quite so quickly. Hello. Welcome back from lunch. I'm going to try to re-caffeinate you with my words – we'll see how successful I am. This is my first time here. If my white lanyard wasn't tell-tale enough, my choice of song probably gave that away. That's probably the opposite of Motörhead, wouldn't you say? If you had to name the extreme opposite, a Grateful Dead song would probably be perfect.
It's actually a cover of Touch of Grey by The War On Drugs, who are from Philadelphia, which is where I'm from – my favorite city in the country. [APPLAUSE FROM CROWD] Philadelphians! That's fantastic. Cool.
So I'm going to talk to you guys today about A.I., hype, and the future of humanity – it sounds a bit lofty. The last time I was in front of a group of people was not that long ago, but it was to officiate a wedding ceremony. First time I'd ever done that; I had a blast doing it. My goal for that ceremony was to have the audience laugh a little and cry a little. I accomplished that goal. My goal today is to minimize the number of eye rolls that happen across this audience. Because we are talking about A.I., after all.
All right, so as I mentioned, my name is Rick. I am the co-founder and CEO of Guru, headquartered in Philadelphia, Pennsylvania. Before that I started a company called Boomi, which was in the cloud integration world. We started that company in 2000, and it was eventually acquired by Dell many years later. Working there, going through the growth of Boomi, and then working at Dell, I lived the pain that Guru now solves – so I left and started Guru in 2013.
So a lot of what I'll talk about, when I talk about hype, is in the context of enterprise software and going through various hype cycles – being a participant in them, observing them, and laughing at them. I'll share some reflections on that. I'll also talk a little bit about some of the work we did at Guru. The goal is to share some things that I think went well and some things that didn't go so well, so that if you're thinking about embarking on any A.I. projects, maybe a few of those nuggets could help you sidestep some things. Cool.
I mentioned starting in 2013. The thing we talk about in every town hall is that we believe the knowledge you need to do your job should find you when you need it. So we're trying to change the behavior of searching for something out of context and wondering if it's right – which is typically what happens when you use things like wikis – and instead bring that knowledge into your workflow. We work with a lot of sales and customer service organizations – customer service is something I'm particularly excited about, and I'll share some thoughts on it today – and we tend to focus on companies that are growing quickly and adding a lot of people to their customer-facing teams. Growing a lot of sales folks, growing a lot of customer service, where they're feeling the pain of this.
We talk about this loop: we collect knowledge, we verify its accuracy, and we embed and empower that knowledge, making it accessible to the folks who need it. That's a little bit of context about what Guru does. Enough about that – let's move on to talking a little bit about A.I. I always like to start with a quick level-set. This is not going to get technical at all – sorry if that was your hope; my guess is it wasn't. But I do like to level-set some of the terminology that people tend to throw around a bit liberally when they talk about artificial intelligence. At the very top we have the field of computer science. Within that is artificial intelligence: incorporating human intelligence – simulated human intelligence – into machines. Within A.I. are subfields, one of which is machine learning (ML), a phrase you probably hear people use interchangeably with A.I. – it's actually a subset of A.I. NLP (natural language processing) is another subfield of A.I., and there are a ton more – there's vision, there are lots of things being done – I won't try to talk about all of them. Within ML there's a very specific subfield that people are particularly excited about these days called deep learning, where if you have access to Google amounts of data you can produce a pretty amazing set of predictive outcomes from that data. Data being the real keyword there, which is something I'll talk about and share more on.
The A.I. spring
It's a really interesting time right now in the world of A.I. This is the only technology that I know of that we've talked about since the 1950s, and you can go back and look at the old videos – if you're bored, they're fascinating – of people showing the first simulations of human intelligence. This technology has been around for so long that it has seasons associated with it: when things are going well it's an A.I. spring, and when things are going badly it's an A.I. winter. There have been lots of these ups and downs over the decades. Right now we're in an A.I. spring. There are a couple of reasons why that's happening.
First and foremost, two of the fundamental things needed to make A.I. work are data – lots of it – and processing power. Cloud computing played a huge role in enabling more and more of us to do those two things really well. The cost to store data has gone way down. The cost to compute over vast amounts of data has gone way down. Sitting here, by the time I'm done talking, you could set up your own very simple machine learning model and probably train it by the time we're done. I don't know that it would do anything useful – maybe predict the next shirt Mark's going to wear – but nonetheless you could do that, and yeah, pretty spot on.
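To make that point concrete – this is purely an illustrative sketch, not anything shown in the talk – a toy machine learning model really can be written and "trained" in a couple of dozen lines of plain Python. Here is a minimal nearest-neighbour classifier, where "training" is just storing labelled examples:

```python
# Toy 1-nearest-neighbour classifier -- illustrative only.
# "Training" stores labelled points; prediction returns the label
# of the closest stored example.
import math

def train(examples):
    # examples: list of (feature_vector, label) pairs
    return list(examples)

def predict(model, point):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # pick the label of the stored example nearest to the query point
    return min(model, key=lambda ex: dist(ex[0], point))[1]

# "Train" on a handful of labelled points in two clusters
model = train([((0, 0), "blue"), ((0, 1), "blue"),
               ((5, 5), "red"), ((6, 5), "red")])

print(predict(model, (1, 1)))   # near the "blue" cluster -> "blue"
print(predict(model, (5, 6)))   # near the "red" cluster -> "red"
```

This is obviously not deep learning – the point is only that the barrier to experimenting, as Rick says, is now very low.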
So that's played a huge role. The other thing that's played a huge role – and I'm not trying to coin a phrase here by saying "enterprise UX" – is this: we used to go to work and think, "Oh yeah, the tools I use at work suck, but that's my job, so I deal with it; the stuff I enjoy using, like Spotify, is what I use on my own time." That's changing now. I think, as software companies, the UX game is being upped universally, and great user experiences have a direct correlation to the ability to do A.I. Because what they do is facilitate the natural and simple input of data – and it's data that makes the A.I. actually work.
So that movement, although you could call it a side effect, has played a huge role in what we now call the spring. And it's good times, it really is. It's very exciting, there's lots of stuff going on, there's lots of venture money funding a lot of garage ideas, and there are a lot of really brilliant people working on lots of different things in the field of A.I. But what's interesting is that when those innovation spikes happen, we enter a world of hype. The reason I cracked the joke about the eye rolling is that at perhaps no time have we been so full of hype as a technology industry as right now, around A.I. and a few other technologies. Gartner – this is my favorite thing they do – puts out every year what they call the hype cycle; they just put this one out last month. On it they plot every technology trend, and they predict that every technology will go through this arc: something goes up the curve as an exciting innovation sparks this new thing that could happen. It gets to the top of that curve and then goes down into the trough of disillusionment – which sounds so dramatic – where, of all the bullshit the vendors said the thing could do, it can actually do half that stuff, and as businesses you're picking up the pieces and asking what you spent all this money on. And then it comes across the slope of enlightenment to the plateau of productivity. It really is amazing. Every year they take all their technologies and plot them on this path, and it's amazing to watch it happen. Some things don't actually make it off the curve. They fall off.
If you look (and it's an eye chart, so sorry), a number of things – deep learning, virtual assistants – are making their way down into the trough of disillusionment.
Artificial General Intelligence
They're saying it's more than 10 years away, and I'll get into that – it's an important nuance. Artificial General Intelligence is what people write movies about. It's this truly sentient, self-thinking, self-learning thing. That's a pretty far-away concept. So taking those subfields of A.I. and plotting them on this curve, we're kind of heading into this trough of disillusionment. This is that same chart from 2009. Cloud computing was on the exact same path and trajectory in 2009 as A.I. is now: you had just as many people talking about it using words that didn't make any sense, you had just as many people dismissing it as a fad that wouldn't go anywhere, and you had just as many people doing genuinely interesting, innovative things at the time. If you look at the top of the peak of inflated expectations, there sits cloud computing. Obviously it has, over those years, come down and hit its plateau of productivity, and it's now most of how we think about modern computing applications. At Boomi, the company I started before, we launched a cloud-based offering in 2008, right in the middle of all of this. And it's just been very fascinating to me to observe the behaviors that happen, because there are definitely some interesting patterns that go on. Here's a good example. We see articles come out like this: "A.I. will soon write better novels than humans, according to a computer scientist." A few things on this. One: that will not happen. A.I. will not soon write better novels than humans, according to any computer scientist. Two: this computer scientist didn't actually say that. If you go on to read the article, they said perhaps in 20 or more years – and I would argue over whether even that is going to happen. But things get out of control.
Jargon appears everywhere. It's interesting – A.I. in particular is a very technical field, so you have a lot of data scientists deeply involved in the inception of these ideas and driving a lot of the innovation, and you see words and phrases like this put together: "makes use of machine learning, deep learning, and transfer learning to build a unique answer graph."
If you're a customer service leader, do you know what that means? That is not something that talks about how your organization will actually perform and do its job better. "A.I. delivered by A.I." – that was my personal favorite. I saw that at an actual conference; it was an actual stand-up banner. It's so meta – that's what I love about it. It doesn't really mean anything, though. "We train a deep neural network model by converting historical customer service transcripts into numerical representations called word vectors." My point is that these things are being put front and center to describe this technology, and that is not helping business leaders understand what the actual capabilities are and what can be done. So there's a lot of this behavior going on. Now, with A.I., where it gets really interesting is that there's a whole litany of movies about A.I., and none of them end well. They all end with the machine winning. If you haven't seen these movies, in summary: humans lose. Actually, in some cases we don't know – we don't know how Westworld's going to end; we'll find out, some of you might know, I don't. But things go really, really badly. This fuels the fear that A.I. is going to impact humans, the fear that A.I. will replace jobs – and it's a real thing.
Andrew Yang is running for president in 2020. His campaign platform is universal basic income. Universal basic income is an idea made popular mostly in the Bay Area: while we're transitioning to a world of A.I.-driven automation, we provide an income to help people through career transitions. It seems to me a bit like giving up – there are probably better ways, I think, to try to stave that off – but nonetheless, someone is spending a lot of time on this. The first A.I. church now exists. It's called Way of the Future. This church was started last year. I'm not kidding – this church was started last year. "You will be able to talk to God, literally, and know that it's listening." The theory behind this religion is that rather than fighting the machines, let's go ahead and pre-emptively surrender and praise our new sentient overlords, because they'll do right by us. So it's a real thing. But let's come back down to earth and talk about where things really are today. Here's a really interesting way to connect the dots between technology and biology. I'm not going to get technical, but remember that A.I. is simulated human intelligence. One way you can make that comparison is by looking at the mental capacity of an A.I. system and comparing it to the mental capacity of an animal. Along those lines, what's happening is that the mental capacity of neural networks – again, one of those subfields of machine learning, specifically built to emulate the way a biological brain functions – is growing at a nonlinear rate. And that's pretty cool. Where are we today, in 2018? About the capacity of a frog, to put that in context. We're far away from how a human mind works and operates. Even at frog capacity, there are things machines can already do better than we can: math. But there are things a machine can't touch: like human empathy.
And that's really where I think it's important to stay focused. Framing the conversation this way can be a helpful way to level-set and to really get an idea, contextually, of what's possible right now. Again, the exciting thing is that it is going in an interesting direction, and it will grow nonlinearly.
One of the biggest things correlated to that growth is inputs. Remember, A.I. systems feed off of data. The data comes from us and the things we do. It started with text and keyboards. Then phones came out and we actually went backwards a little, because we went from typing with ten fingers to typing with two thumbs, and that slowed down our ability to do input. Now voice has become a proven usage pattern. A lot of us probably have some sort of Alexa device in our homes; speech-to-text is real – it's fast enough, it's accurate enough, it's a real thing now. The Google 411 system – did anybody ever use Google 411? Ever hear of that? It was a brilliant way to train an A.I. system. You would call it, say what you wanted, and it would come back with the result and say, "If you want me to connect you, press one" – which is the feedback loop that tells it it got the thing you said right. And then, in true Google fashion, once they had enough training data they just turned it off – "oh no, we didn't actually want to help you, we just wanted training data." So they got that data and built a brilliant speech-to-text system. One of Elon Musk's many projects, Neuralink, is taking this to the next level and literally interfacing directly with your brain, which in many regards is the mother of all inputs: if you could input into a system as fast as you can think, imagine the rate of growth of these A.I. systems. So inputs are a very interesting way to think about both the constraints today's A.I. systems live under and how those could open up.
Here's a really fascinating way I like to think about the other impediment, or reality, of A.I., which is, as humans, our ability to adapt to any technology change – and A.I. is a big one. This graph is from a great book called 'Thank You for Being Late', written by Thomas Friedman, which I recommend. I love his take: he combines technology, globalization, and environmental uncertainty with an optimistic view, and I appreciate the way he thinks and talks about that. He sat down with Astro Teller, who runs the X projects at Google – all the moonshot ideas; really amazing things going on there. As they were talking, Teller drew this graph with two lines. The first line he drew was technology. That's the line sloping up steeply: the nonlinear rate of change happening with technology advancement and how we are progressing as a society – not in a linear way; it's getting faster and faster. But just because that's happening doesn't mean the other line, human adaptability, is moving at the same pace. And he believes we already have a gap: the "we are here" marker means we're now at a place where technology advancement has eclipsed humans' ability to adapt to it.
And his point is that that's a problem. As an optimist, it's a problem for the good and positive outcomes and uses of technology. It's also a problem if you look at it the other way and think about the likelihood of technologies replacing jobs. The argument he's making is that the rate of learning, the rate of reskilling, the rate of developing new skills to keep ourselves evolving, isn't happening fast enough. So I think it's important to level-set that and keep that connection to humans top of mind, which I'll come back to a few more times. Now, M.I.T. and Boston Consulting Group recently did a study, because something people always ask is: is any of this shit actually happening in A.I., or are we just talking about hackathons and V.C.-backed companies? They surveyed 3,000 companies, grouped them into buckets of pioneers, passives, and folks in the middle, and looked at whether they are actually investing and actually learning. That's really where they focused: are you taking on projects, are you teaching your team, are you hiring data scientists? The answer is a pretty strong yes. Most of what you'll read about were what would be called departmental – very scoped, specific – projects, which, as I'll share with the things we worked on, I think is the right move. The other interesting thing – and you can certainly read the report; it's pretty interesting – is that the investments tended to be focused on revenue opportunities, not cost savings. I think that's a pretty exciting way to think about it, and it's exciting to see how businesses think about it. They're actually looking at A.I. to go after new markets, to acquire different customers, to acquire customers they couldn't before in a more efficient way, or to experiment with pricing models more efficiently – things like that.
So I think that bodes well for us as a technology industry. My summary from that: it's early days, but real value is being seen by companies big and small, it's being applied to revenue generation, and it's happening across all different departments. Now, that's not to say that any one company is going company-wide with massive A.I. endeavors. And I'll talk about why I think that is. We talked about data already, but data is the biggest impediment to going wide with A.I. – and it's not just access to it, but the quality and accuracy of it.
So that was cool. I'm going to switch gears now and talk about the journey we went through. At Guru we went through a pretty big A.I. initiative, and I'm going to talk about how we thought about that opportunity, what led us to focus where we did within our product and within A.I., and some things I think we took away after doing it. There were a couple of things going on that led us down the path. The first, and something I mentioned before that I'm super excited about, is how the customer service industry is undergoing a big change right now. We all think of – and historically have always thought of – a customer service department as a cost center. It's a necessary cost to solve customer issues: turn them and burn them, get those tickets done, get the customer off the phone as quick as you can, deflect as many of them away from you as you can. It's all about cost savings. What's happening is that companies are changing that belief – I see more of them every day, and they blow my mind with how they think about it – and turning their customer service org into a revenue center. The way they're doing that is by tying their operational metrics to revenue outcomes. So instead of asking how many tickets you churn through in an hour, they ask how many support interactions led to a customer converting from free to paid service. Instead of asking what the average handle time of a ticket is and how to reduce it as much as possible, they'll ask how many open-ended questions you asked your customer while you were on the phone with them. I'll give a great example: one of our customers is Shopify, which has been an amazing example of this.
Shopify is what I truly view as a visionary in thinking about customer service. I was at an event they were hosting, and they played a transcript of an agent on the phone with a customer. The customer called and said, 'Hey, I need to change the theme of my store.' Now, the old way of thinking, as a customer service agent, is that that's a two-minute exercise, right? No problem: go here, click that, click your theme, cool. That's how it would have gone down. What the agent said instead was, 'How's your store doing?' And the guy said, 'Well, that's actually why I'm calling.' Ha. Getting behind the real meaning of the call, which wasn't really to change the theme – it was to change the theme to see if that made the store perform better.
‘Oh really. Why is that?’
'Well, I don't know, I've had this and that going on...'
I won't go through all of it, but at the end of the conversation – which, by the way, went on for minutes and minutes – that merchant ended up tripling their business over the next month. So what started as a call to change a theme turned into changing the way that store owner actually sold. And that's the point, right? That's the point. And every dollar that merchant sold was a revenue kickback to Shopify as the underlying platform. That's the kind of thinking that blows my mind as I see these companies living it every day.
Despite that, a lot of the A.I. conversation for customer support tends to be focused on the two buckets on the left: deflection and bots. Deflection, as the word implies, is deflecting the customer away from your customer service team. Think about that. Most of us operate subscription businesses, which means that when we close the sale, that's the beginning of the customer relationship. In the old days of perpetual software it was the end, right? We got that big license and we moved on. Now it's the beginning – and who has the most enduring relationship with that customer? Customer service teams. So to deflect away the opportunity for that relationship is a missed opportunity to generate further revenue, yet that's where the conversation gets applied. Bots are a big one that everybody talks about, and I don't want to categorically take them apart. I have seen bots do a great job – for example, while I'm waiting to talk to you, my agent, the bot will say, 'Hey, while you're waiting, here are a few things; here's an article that might help you.' Cool. I'm waiting to talk to you anyway. That's good. It's when the bot is used to simulate the agent that it creates a trust problem between the customer and the agent. So all this A.I. technology is being applied to cost-savings measures. It's getting applied to that old way of thinking: turn 'em and burn 'em, deflect them, get those ticket volumes down, reduce average handle time. But no one is actually focusing on the agents themselves. We saw that and said, 'Man, this really seems like a missed opportunity.' It really is all about the humans and the partnership between the A.I. technology and the humans. That led to some thinking and a focus on coaching. When we say coaching, what we mean is: how can we help an agent respond faster and more confidently to their customer when questions come up? That's how we think about the problem, and that's how we got there. And then finally, the data problem.
So we also knew that to build a real A.I. system, we needed training data to train the model to generate accurate predictions. So we looked at our usage data. We had the fortunate reality of a good DAU/MAU.
DAU/MAU (Daily Active Users / Monthly Active Users) is a metric that a lot of consumer technologies use. It asks: of your monthly active users, how many are using your product every day? We look at that relentlessly. The reason we look at it so much is that we're in a category where people are replacing things like wikis with Guru, and the number one problem they have is that no one uses those things. So customers are very focused on adoption – and sustained adoption.
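As a concrete illustration of the metric – with made-up numbers, not Guru's – DAU/MAU is just average daily actives divided by the number of distinct users active in the month:

```python
# Sketch of the DAU/MAU "stickiness" ratio -- hypothetical data.
import datetime

def dau_mau(daily_active_counts):
    # daily_active_counts: {date: set of user ids active that day},
    # covering one month of activity
    monthly_active = set()
    for users in daily_active_counts.values():
        monthly_active |= users
    avg_daily = sum(len(u) for u in daily_active_counts.values()) / len(daily_active_counts)
    return avg_daily / len(monthly_active)

# Toy "month" of 3 days with users a..d
activity = {
    datetime.date(2018, 9, 1): {"a", "b", "c"},
    datetime.date(2018, 9, 2): {"a", "b"},
    datetime.date(2018, 9, 3): {"a", "b", "d"},
}
print(round(dau_mau(activity), 2))  # 8/3 avg daily actives over 4 monthly actives
```

A ratio near 1.0 would mean almost every monthly user shows up every day; the sixty-five percent Rick mentions below is unusually high for a workplace tool.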
That's a big thing, and we got that signal in the early days when we were figuring out Guru: focus a lot on that engagement data. But it's actually not about the ratio itself; it's about the underlying usage data, because what we were actually seeing was specific personas using the product. We'd sold to customer service for a while by then, but what we realized was that, of all our users, customer service agents were the ones using it the most. Their DAU/MAU was like sixty-five percent – it was even higher. So we saw a higher usage pattern, and then we could dig into it and ask: where are they, what are they doing, why are they using it? I talked about how the knowledge you need to do your job should find you – you're doing something else, and you need knowledge to do that. So what are they doing? They're solving a ticket in Zendesk, or they're chatting using LivePerson or something like that, and then they're using knowledge from Guru to help them do it. So we looked at the data, and the data revealed the A.I. project to us. The combination of those things led us to say: okay, cool, this is what we're going to do. We're going to build something that recognizes when they're using Guru in those contexts – when they're working tickets or chats or whatever – learns from that usage, and then proactively suggests that knowledge. That's how we got through the decision process of deciding what we were actually going to build. And that's really what we today call the loop: the core infrastructure is really just making it very simple to access and use Guru, and that usage data trains Guru to be more predictive for you. This is all done in a very clear way as you set it up and use it.
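To picture what "proactively suggest knowledge from context" could mean – this is a toy sketch, not Guru's actual system, which learns from usage data rather than simple text matching – here is a naive version that ranks knowledge cards by word overlap with the ticket an agent is working on:

```python
# Toy context-based suggestion: rank knowledge cards by word overlap
# with the ticket text. Illustrative only -- a real system would learn
# from agent usage signals, not raw overlap.
def tokenize(text):
    return set(text.lower().split())

def suggest(ticket_text, cards):
    # cards: {title: body}; return titles ranked by shared-word count
    ticket_words = tokenize(ticket_text)
    scored = [(len(ticket_words & tokenize(body)), title)
              for title, body in cards.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

cards = {
    "Refund policy": "how to issue a refund for an annual plan",
    "Theme setup": "steps to change the theme of your store",
}
print(suggest("customer wants to change their store theme", cards))
```

Even this crude version shows the shape of the loop: the context the agent is already in (the ticket) drives which knowledge surfaces, without the agent searching for it.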
So that's the project. Here are the things I would say came out of it – things we learned along the way that may be helpful as you're thinking about or refining A.I. initiatives. And I have plenty of time for questions, by the way; I'm happy to dig into any of this. So the first one: A.I. is only as good as the data it can learn from. I've talked about data a few times. What I want to focus on here – and why I said 'jack of all trades' – is that it can be tempting to go wide with the dataset. What I mean by wide is that we could have said: oh, we're going to look at the engineers who use Guru, and the sales reps who use Guru, and the H.R. professionals who use Guru, and the service professionals, and try to mash all that together. The learning was that the suggestion quality and accuracy is really poor, because those are four distinct use cases all doing different things. Only one of them is actually working on support tickets; the others are writing code or filling out H.R. policies. They're very different things. So be very, very focused on a narrow dataset. One of our investors, Emergence Capital, is a thesis-based investor, and they talk about coaching networks. There's a lot on the slide, but what I really want to draw your attention to are these quadrants and the way they're labelled: on the bottom is non-proprietary data, on the top is proprietary, on the right is creation, and on the left is aggregation.
So, for example, the bottom left – an aggregation of non-proprietary data – would mean that today you could go out and scrape a whole bunch of publicly accessible data, like Twitter streams, and use that to build an A.I. system. The point they're making is that you're not generating a data asset that is proprietary to your business. Your defensibility is nil; with that approach, you're in an arms race against the next smartest data scientist. The top right is the opposite extreme: a proprietary set of training data, meaning data that you uniquely own in your application, created by how your users use your system. When you have that going, you're truly creating a company that can sustain itself against the incumbents – because let's face it, we all know the big technology companies are hiring the best data scientists in the world as fast as they possibly can. You're not going to win with a better algorithm; you win with a proprietary dataset that's unique to you, because you can copy algorithms, but you can't copy that proprietary dataset. So focusing on the data is critical. OK.
So in that hype slide I had people explaining vectors and word graphs and all that stuff. That doesn't mean anything to a business owner, right? They're looking at an outcome. An outcome for a customer service owner is: how does my team contribute to our company's revenue? That could be an outcome. How do I train my new hires faster, so they can start working with customers sooner? That's an outcome. How do I reduce the time it takes to close new deals? That's an outcome. A.I. is such a technical concept, and it's so driven by technologists, that it can be easy to get caught up in that. We spent a lot of time thinking about this at Guru and trying to focus on outcomes. I'll go back to customer service as a good example. As a technologist, you can look at volumes of ticket data and be tempted to think: oh, I can have a machine do that. I can have a machine look at all the tickets that came in, analyze them all, and then just start automatically firing back answers as new tickets come in. As a technology exercise, that actually checks a lot of boxes: large volume of data, it's proprietary, you can build a lot of cool insights on it. The problem is: should you actually do that? Should you have a machine interacting with your customer? Not could you, but should you.
There’s a really good analogy here. Forrester basically predicts that there’s going to be a rush to do exactly what I’m describing, that a lot of companies will do this because they’re looking at the cost savings opportunity, and that customer satisfaction is actually going to go down, not up. Customers are going to be unhappy, right? When we come in needing customer service, we’re oftentimes not in the best of spirits. We’re not happy. You all remember a really shitty customer service experience you had, I guarantee you, and you probably had one in recent memory. You probably also remember an amazing white-glove service experience you had too. We’re compelled to write about those and tell people about those - we’re more compelled when they’re shitty, unfortunately - but when they’re really good, we’ll tweet about them and talk about them. Let’s make more of those. Right. And the prediction here is that there’s going to be a rush in the other direction. You might remember in the 80s when companies like Dell started outsourcing their call centers to foreign countries. The problem was they weren’t teaching the folks they were outsourcing to the language skills they needed to properly work through the issues. They looked at it through the monocular lens of cost savings. And what happened? You can read how that all went. Outsourcing is a huge industry today, but it’s fundamentally different than it was back then. I think there’s a very similar path that will happen here.
Over-invest in UX
Three. I talked about UX in the beginning - the enterprise user experience, the idea that we have the same expectation of the tools we use at our jobs as we do of the things we use at home, or at least that gap is narrowing and our expectations are going up. UX matters more than ever. But the reason it matters more than ever in this context is that a good user experience - an intuitive, simple-to-use UI - generates not just a user who builds up a habit around your product, it generates that critical training data. So when I talk about connecting the two, that’s really what I mean. What we kind of figured out at Guru - I used that DAU/MAU stuff earlier - is we connected the success of our A.I. project to the UX of how the person interacts with the system, and we did not treat those as independent systems. They were viewed as one thing, and the team that worked on this was one team: engineers, data scientists, and UX, all working together. Looking back, that was probably the right way to think about it. Our design team was very happy when I put this slide in the deck too; they were like ‘hell yeah’!
Close the AI loop
So, closing the loop. I talked about this idea of training the A.I. system: data gets fed into an algorithm, that produces what’s called a model, and the model is how you predict things. You can feed the system new information it has not seen before and it will predict, with a certain level of likelihood, what to do next. Closing the loop is how you actually refine that and evolve it over time; it can’t be static. One interesting way to think about this: most of the A.I. we deal with today is consumer-driven - meaning not enterprise software - and those consumer-driven things are capturing a lot of data, but we’re not getting anything out of it. You know what we get out of it? Ads. Right. Ad tech has been at the forefront of much of the innovation around A.I.; the problem is there’s no loop. We don’t get anything back. We give a lot - the things we buy, the websites we visit, the places we go - we give and we give, and we just get ads back. That’s sad. In this world, hopefully, that doesn’t happen; it needs to be a loop. So again I’ll go back to our friends at Emergence, who I think have done a brilliant job describing this. They have a very humanity-first approach to how they think about A.I., and I very much agree with it: capture the brilliant outlier - the creativity, the new knowledge, the new information, the new thing - from enough people that it goes back into the model, trains that system, and makes the general output better. It’s humans in the loop through the entire process. In order for that to work, you have to close the loop. By closing the loop, all I mean is that in your application you have to know whether the suggestion you put in front of the person was right or not. You have to see how they react, and that can be done implicitly or explicitly.
In our example, when we put a suggestion in front of you - there’s a ticket open, they’re asking a question - we say, hey, we think this FAQ might help with this question right now. We know if you actually used it, if you read it or if you copied it. That’s implicit usage. We also have a way for you to give us feedback, like ‘that was a shitty suggestion, your algorithm needs work.’ You can give us a thumbs down and tell us that. That’s what I mean by closing the loop: capturing the result. And this idea of human in the loop, I think, is a really exciting way to demonstrate it.
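The implicit and explicit signals Rick describes could be captured with a small feedback log like this - a minimal sketch, not Guru’s actual implementation; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class FeedbackEvent:
    """One closed-loop signal: did the suggestion help or not?"""
    suggestion_id: str
    signal: str          # "read" / "copied" (implicit), "thumbs_up" / "thumbs_down" (explicit)
    helpful: bool
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FeedbackLog:
    """Accumulates signals so the next retraining run can use them."""

    def __init__(self) -> None:
        self.events: List[FeedbackEvent] = []

    def record_implicit(self, suggestion_id: str, action: str) -> None:
        # Reading or copying the suggested FAQ counts as implicit success.
        self.events.append(
            FeedbackEvent(suggestion_id, action, helpful=action in ("read", "copied"))
        )

    def record_explicit(self, suggestion_id: str, thumbs_up: bool) -> None:
        # The user told us directly whether the suggestion was any good.
        signal = "thumbs_up" if thumbs_up else "thumbs_down"
        self.events.append(FeedbackEvent(suggestion_id, signal, helpful=thumbs_up))

    def training_labels(self) -> List[Tuple[str, bool]]:
        """(suggestion_id, helpful) pairs ready to feed back into the model."""
        return [(e.suggestion_id, e.helpful) for e in self.events]
```

The point is only that every suggestion produces a labeled outcome - that is what “closing the loop” means in code terms.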
A.I. Is hard
Five - I have one more after this. Your model can start out dumb. So again, we’re simulating human intelligence here using advanced math, to oversimplify it. The mechanics of building a true A.I. system that learns and evolves are really hard. Having gone through it with our team, it’s very hard to do, because it takes the full end-to-end discipline I talked about, from the UX all the way through to the ops at the end of actually generating that model and having it scale. Half of what makes it hard is that machinery I just described. It’s different from software engineering, because software engineering is static: it doesn’t change and evolve. We hard-code things to do specific things, we deploy that to production, and they do it. This is different. You deploy something that doesn’t actually know exactly what it’s supposed to do, and you train it with user behavior over time. The specific part that’s so hard is getting that mechanism in place: a user takes an action, which trains an algorithm, which feeds a model, which generates a suggestion, which you do something with. Just getting that right is really hard. Then you could spend quite literally forever perfecting it to be the smartest thing in the world. The biggest learning we had here is that you don’t actually have to do that. What’s much, much better - and puts you in a much better position to move fast with your customer - is to get that machinery in place first, then evolve it with them, using the data their own teams generate to train it. That’s something you can do in partnership with them. You can evolve the model, you can iterate it, you can make it better, you can improve it.
That’s so much easier to do once you have that foundational stuff in place. In many ways, the way we talk about it now inside Guru is: as we build new projects and generate new models, we want to make them dumb at first. We’d love for them to be smart, but we’re OK if they’re dumb at first, because what we’re really trying to get in place is our best guess plus the machinery to evolve it from there. And that can be a little bit freeing when you actually accept this idea, because it is tempting - you’re working with data scientists who are very intelligent, highly educated, academically focused; in a lot of cases they’ve been in academia longer than they’ve been in a professional work environment, and they’ll naturally seek something close to perfection. It’s great when you can free that thinking and go: you know, let’s iterate. A dumb model can be a great way to do that.
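A “dumb at first” model could be as simple as this sketch - a hypothetical illustration of the idea, not Guru’s system. It starts with one default guess for everything and only gets smarter as the closed-loop feedback accumulates:

```python
from collections import Counter, defaultdict

class DumbModel:
    """Starts with a best-guess default and learns per-category answers
    only as real user feedback arrives - the machinery matters more than
    the initial smarts."""

    def __init__(self, default_answer: str) -> None:
        self.default = default_answer
        # category -> vote counts for each answer users accepted
        self.votes = defaultdict(Counter)

    def predict(self, category: str) -> str:
        counts = self.votes.get(category)
        if not counts:
            # Dumb at first: same guess for everything we haven't seen.
            return self.default
        # Once feedback exists, suggest the most-accepted answer.
        return counts.most_common(1)[0][0]

    def learn(self, category: str, accepted_answer: str) -> None:
        # Each accepted suggestion is a training signal; the model evolves in place.
        self.votes[category][accepted_answer] += 1
```

The user-action → train → model → suggestion cycle is all here, just with trivially simple math; once this loop works end to end, the prediction logic can be swapped for something smarter without touching the machinery.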
A.I. Should empower, not replace
And then finally - and I think this is just so important, for so many reasons - think about A.I. projects as empowering people versus replacing them. There’s so much talk now about automating, automating, automating. For one, I think it’s doing a disservice to the industry, because I don’t think it’s right. I don’t think you will actually be able to automate away entire jobs anytime soon, but I think there are absolutely opportunities to partner. It’s a partnership. And in that partnership, let humans be better at human things, like empathy and emotion, and let machines be better at machine things, like math.
So, a quick recap, and then we’ve got some time for questions, I think. Yeah. I do believe A.I. is transformational. I love comparing it to cloud computing, because I think a lot of that same hype behavior is happening all over again, and it is the same pattern. I think Gartner does a nice job of drawing that trough of disillusionment: while the hype ensues, there are real gains to be achieved today. Again, I think those real gains start with small, focused things. I was having this conversation earlier today, and the person I was talking to said, yeah, when I actually look under the hood at a real A.I. thing, it’s boring. I’m like, yeah, it is - because we have Hollywood depicting it as this glamorous, terrifying, amazing thing, and then we have the reality, which is like: oh yeah, it automatically tagged a support ticket. Wow! Mind-blowing. And that’s the problem. If we can tone it down a little bit, focus on the outcomes, focus on small, pointed things that can drive real improvements in the way humans work, we’ll be on a good path. And then finally: instead of thinking about how we automate away the stuff humans do, what if it was the A.I. itself that actually helped us grow and learn? That telegraph I showed - what if it was A.I. itself that actually closed that gap and got us caught up with that rate of technology change?
Cool. Thank you guys I’m happy to take any questions.
Mark Littlewood: Okay as usual you stick your hands up and we can get some mics out there so we’ll start with Mark and then we’ll come down to Glenn.
Audience Member: Thanks. Thanks for a great talk, Rick, and I don’t think you’ll be replaced by a computer in the next 18 months or so. Quick question on your thoughts on A.I.: how much do you think it is essentially 1950s neural net technology finally getting fast enough boxes and enough data, and how much do you think it is genuinely something new and revolutionary in the thinking space?
Rick Nucci: I think if I were to characterize the A.I. spring today, it’s more about the former - more about the fact that we can just compute a tonne more things. I don’t think we’ve yet figured out how to do that in the efficient way we’ll eventually need to. You’re seeing the new iPhones just getting the first chips designed with a GPU-style chipset to do these types of computations; that’s only just starting. So I think the spring today is much more about those three things - data storage, computing, UX - which arguably are good hygiene things that have been around, that people have been thinking about for a long time.
Audience Member: Hi. Great talk. I was really interested in what you had to say about the past. I was recently reading a very long book about Andrew Carnegie at the turn of the century, and it talked about how industrialization effectively displaced huge numbers of people at that particular time. And you had that graph where you said you felt the technology has now moved forward. Isn’t technology - some technology, from the printing press onward - always sort of doing that? People are often having this debate about, you know, computers are going to take over the universe and we’re all going to be screwed, or whatever. And the same things happened in this book. We’re still here. We all have jobs. Could you talk about that?
Rick Nucci: I agree with that line of thinking. I think the one thing that’s different today is that the advancement of these technologies is happening much faster than it used to. If you think about the time it took to go from horse and buggy to car, that was over a hundred years. Compare that to how quickly things change and turn over now, and the real concern, the real fear, is that we have to continue to adapt and upskill ourselves - and to think about jobs today that don’t involve a whole lot of what makes humans human, like driving a car. That’s why that’s such a hot topic: it’s something that’s ripe for automation. So I think the opportunity we have - and the optimist in me, and a lot of people, think this way - is to get ahead of that: to focus on how we can evolve with this technology, and on developing the skills that make us innately human, and not spend our time on things that will continue to get eaten away. To that point, though, the other part of the argument is: well, with each new technology shift, as many new jobs were created as were taken away. I very much agree with that, and that’s why I say one goes with the other. We have to reskill, we have to push ourselves; the way we think about institutional education needs to change over time - it already has, and it will continue to change. There will be plenty of new career opportunities, but I don’t think they’ll look the same as they do today, for sure.
Audience Member: Hi Rick. I realize I’m going to ask the same question as everybody else, just in a slightly different way, which is: how scared should we really be? Which is all we really care about. Although, on a different note, if anybody has seen the news today, Amazon’s raised its minimum wage for its workforce to fifteen dollars an hour, which is a good sign - a sign that the human capital inside Amazon is being recognized and can’t just be swept under the wave of the machines. Anyway, my question is more about which side you fall on, and I suspect it’s the latter. Elon Musk said that Mark Zuckerberg was essentially a bit thick in not understanding the threat of A.I., and I don’t think anybody would really agree with that - that Mark doesn’t understand it - but clearly he has a different view. Why is Elon Musk wrong?
Rick Nucci: So I don’t think he’s wrong. I think there’s a time context that’s missing. If you listen to Elon Musk talk about this - the infamous Joe Rogan podcast that many of you might have seen and heard, where he did some things he may now regret - he talks a lot about this exact topic, and his point is it’s when, not if. But if you listen carefully to what he says, he doesn’t say it’s in three years or five years. That’s what gets lost in this conversation a lot: in the next five years, in the next ten years - yes, the rate of change is non-linear, but machines aren’t writing novels, and machines aren’t even truly driving cars autonomously; if you look at what most experts think, it’s still human-assisted, things like that. His point is much more: it is going to happen someday, and governments take decades to regulate things. This is a thing that should be regulated, so let’s have the conversation now, so that thirty years from now, when real things - artificial general intelligence - are happening, we’re not being reactive; we’re not waiting for people to die and then suddenly regulating it. That’s his point. I think when you see people like Mark being dismissive, they’re thinking more about today, in real time. At least that’s my takeaway: they’re two very different things.
Audience Member: Rick, thanks for being here. The thing I’ve seen a lot of is IBM Watson - partner with IBM Watson and they’ll be your platform or your back-end engine for A.I. I’m curious what you think about software companies partnering with some sort of A.I. - I don’t know the appropriate word - backbone or platform, to leverage as part of either their product or their operations?
Rick Nucci: Yeah. So I’ll give you the answer for a software company, and then the answer for an enterprise where software isn’t their primary line of business. For a software company, you’re running a big risk of not having that proprietary data asset that protects you. You are, in a lot of ways, signing yourself up to get run over by someone, because you’re not building something that’s ultimately defensible. As a software industry we can copy almost everything else in the stack now, right? We can see an app and clone it in a month - and all that UX you actually worked so hard figuring out? Yep, copied, and now I have that too. So what’s left? Well, one of the things that’s left is that proprietary dataset. When you do work with companies like that, you have to be really careful, and if you look at the fine print, they’re training a model using your data - not just for you. They’re training it for their own future use, in future initiatives and projects. You have to be careful of that.
The enterprise answer, though, is a little bit different. It’s fascinating. When you read that M.I.T. report I was referencing earlier, these enterprises are almost behaving like technology companies - they’re almost participating in the hype like the vendors are, and they talk about the things they’re doing. But I think that’s a good thing. I think that’s exciting, because it’ll maybe make us all calm down a little bit and speak a little more plainly to each other. They’re doing these projects, they’re working with those systems, because of what those platforms give you - Watson, Google, Amazon, Microsoft all have them now: you don’t need data scientists on your team to deploy these models. And there’s a real blocking-and-tackling problem: you can’t find data scientists. It’s by far the most sought-after technical skill. So in those contexts and use cases it makes a tonne of sense, because you can partner with them; they can do that work for you; they’re deploying those scientists. But in any use case and context, it’s so important to know what you’re signing up for. I talked about that A.I.-is-hard point; that doesn’t go away when you partner with someone like this. You’re getting a skill set and a competency, but you’re signing up for a long development project, for sure. So eyes open going in: it can be valuable, but I personally feel there should be a lot of caution when you think about this as a software company.
Audience Member: Great talk, thanks. We’re building customer service software, and we’re using A.I. to basically vectorize support tickets against knowledge base articles. And we’ve faced a problem of how ethical it would be to use one customer’s data to train the model. We have thousands of companies using our software and millions and millions of support tickets, and we obviously talked to our lawyers: are we allowed to use one customer’s data to train the whole model and then use the model’s predictions for other customers? They say totally no problem, because it’s not personally identifiable, but we still have to get the customer’s consent for this, and a lot of people are just so afraid that their data is going to be fed to some mysterious black box that will come back and kill them. So do you have that problem at Guru? What are your thoughts on this?
Rick Nucci: Yeah. The problem is that the big technology companies like Facebook have kind of ruined that for us, because those are the stories we read about with misuse of data. Our view - Guru’s view - is that transparency builds trust. I’d rather have you say no than do it and hope I don’t get caught, and then you find out later - because you will find out, and that will come back and get you, besides it just not being the right thing to do. That said, if you are prescriptive and upfront about it, that will tip… it’s the same conversation we had with cloud computing, honestly. Back then it was like, wait, you’re going to put my CRM, with all my customer data, in your cloud? Yeah, right. And now that’s just second nature. We’ll get there. But the more we try to obfuscate what we’re doing with that training data, the slower it’s going to be. So yeah, our view is: be transparent, be very upfront about it, don’t bury it away in the terms of service. And even if they say no, that’s still the better path.
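The “even if they say no” stance translates to something mechanically simple: filter every training set by explicit consent before the model ever sees it. A minimal sketch, with hypothetical field names:

```python
from typing import Dict, List

def build_training_set(tickets: List[dict], consent: Dict[str, bool]) -> List[dict]:
    """Keep only tickets from customers who explicitly opted in to
    cross-customer model training. Anyone missing from the consent map
    is treated as a 'no' - opt-in, not opt-out."""
    return [t for t in tickets if consent.get(t["customer_id"], False)]
```

Putting the check at the point where the training set is assembled, rather than somewhere upstream, means a customer’s “no” can never be lost between consent capture and model training.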
Audience Member: Thank you very much for a wonderful talk. I’d be remiss if I didn’t take this opportunity to ask this question: do you have any advice for a data scientist who is building a coaching network for a company right now? What are the things I’m probably not focused on that could help my organization the most and cause the least pain as we turn this over to our customers?
Rick Nucci: Are you asking about white spaces where the coaching networks idea could still be applied that maybe hasn’t been done yet?
Audience Member: I’m pretty focused on getting access to the data that we use to train my algorithms. What are the things I need to think about while I’m doing this? One of the things I got out of your talk is to think about closing the loop and iterating on the stuff we’re doing - focusing more on getting useful things out, then planning on reacting to the ways our customers are using it. Are there other things I should be thinking about?
Rick Nucci: Yeah, let me give you one quick example. I’ll use another company that does coaching network software, called Textio. What Textio does is: you write your job description, you put it into Textio, and it tells you what words to change - like, don’t put the word ‘ninja’ in your job description, because then women won’t apply to that job; don’t use words like ‘rockstar’. The way it trains itself is it looks at your job applications and how quickly the roles get filled. It also goes and scrapes Monster and job postings all over the Internet that are publicly accessible - so that’s aggregation of non-proprietary data, bottom left; there’s a ton of that data. And when a posting comes down, they can reasonably assume that job was closed. So they combine that with the coaching network data on the top right. The reason they combine them is that the coaching network data alone isn’t enough for the model work they want to do. By combining them, they’re still generating that proprietary data asset, but they’re also connecting it with a publicly accessible data source anyone can get. So I think a lot about that. I don’t know that we’ve figured out that opportunity for Guru, but that kind of question - what could you augment your dataset with that’s public, bottom-left-quadrant data, which is fine on its own, but when married together makes your outcome better? - would be one worth giving some thought to.
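The combine-the-quadrants idea could look like this in code - a hypothetical sketch, not Textio’s pipeline. Tagging provenance and up-weighting the proprietary rows keeps the scarce, defensible data from being drowned out by the public bulk:

```python
from typing import List

def combined_training_data(
    proprietary_rows: List[dict],
    public_rows: List[dict],
    proprietary_weight: float = 2.0,
) -> List[dict]:
    """Merge a small proprietary dataset (top right quadrant) with bulk
    public data (bottom left), tagging each example's source so the two
    can be weighted and audited separately at training time."""
    data = [
        {**row, "source": "proprietary", "weight": proprietary_weight}
        for row in proprietary_rows
    ]
    data += [{**row, "source": "public", "weight": 1.0} for row in public_rows]
    return data
```

The specific weight is an assumption for illustration; the real value would come from validating the model against held-out outcomes.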
Mark Littlewood: Thank you very very much indeed. A.I. always safe in your hands. Thanks.