Navigating the Murky Waters of AI in Support: Insights from Craig Crisler

Meet Craig Crisler, CEO of SupportNinja, a company specializing in global customer experience outsourcing, primarily for startups and innovative tech companies. Before founding SupportNinja, Craig collaborated with Scribe Sense, contributing to the development of a large language model focused on handwriting recognition, work that reflects his deep involvement in cutting-edge technology solutions.


Check out this video featuring Craig Crisler, our upcoming speaker for October’s Expo, where he discusses the importance of navigating the legal and ethical aspects of AI in support, highlighting the challenges and potential pitfalls of using AI tools while protecting client data and ensuring compliance with evolving regulations. Get a sneak peek of his compelling ideas before he takes the stage in Las Vegas, NV.


Andrea Silas: This is a conversation with Craig Crisler, who is speaking at the upcoming Support Driven Expo to share his connection to AI.

This is going to be a very interesting track with a lot of new information and discussion. I am Andrea Silas. I am the Vice President of Technical Support at DreamHost. We are a web hosting company and I have been part of Support Driven for many years.

I'm very interested in this because I'm also presenting about AI and how it relates to how we work in support at another conference later. Craig, why do you think this topic matters?

Craig Crisler: So I am CEO of SupportNinja. We do customer experience outsourcing for folks all across the world, multiple different companies, mostly in the startup, innovative tech realm. I'm a startup guy, and prior to starting SupportNinja, I was collaborating with a company called Scribe Sense. Scribe Sense had built, for lack of a better term, an LLM, a large language model, around handwriting recognition. It took tens of thousands of different samples of handwriting and was able to translate them. It was OCR on steroids, AI-enabled OCR.

So AI has been around a bit, but when Gen AI started coming around, I got really intrigued by what was happening. Part of my job at SupportNinja is to protect my clients and the customers they work with, and AI can potentially have some problems in terms of security and a few other things, so it's about making sure we understand how and in what ways to use Gen AI in a meaningful way while still protecting ourselves, our clients, and our customers and their data. That started leading me down the path of looking at AI and the legality of it all, in the sense of: how do we keep everyone safe, but still be innovative?

But it's also a bit of a Wild West from an AI law perspective. That's why it matters: it's a moving target, to be brutally honest.

Andrea Silas: Like I mentioned earlier, it's new, but it isn't, and it's really ramping up right now.

You've already described how you're connected to the topic and why it's important. Why do you care about it so much?

Craig Crisler: There are kind of two aspects to it. One is the large language models and the AI around them; Gen AI is probably the most tangible, accessible form of AI, the large language models that feed the ChatGPTs of the world and make them work. There's law around that. The other aspect is law around data privacy and how and in what ways regulations will start to affect it.

And the LLM side is really intriguing, because what's exciting and interesting about the law related to it is that those large language models are, for lack of a better term, the meat that makes the sausage, right?

How and in what ways the regulations are starting to affect the LLMs flows down to what happens with data for individuals. The discussions around the LLMs are really interesting in that there's starting to be some tightening against them, because there are only three really good ones.

And even those are starting to be viewed a little bit as a monopoly, depending on who you talk to. So that becomes problematic if you're building something using an AI tool at an individual level. That's really important to talk about and walk through, just to understand the basis of some of the legal underpinnings around the LLMs. But the other aspect of it, I think, is the individual stuff, and this is where it gets really intriguing. So we have an AI group, and we do all this work with AI and all this other stuff.

But we had to put out some rules around the ways in which you integrate with it, because it's really easy for someone to mistakenly put data into a model in a way that is an automatic breach of security. If you were to put client information into a data model, an LLM, or ask ChatGPT about it, it will absorb the information.

And so automatically you're giving out information that shouldn't be given. It's not like you're talking to your buddy next to you and asking, hey, how would I answer this question? If you actually use the AI tool and ask how to answer the question, and maybe it's a technical support question with very specific technical parameters that apply only to that client, you're technically in breach. That's where, for me, the why of it became really important: how do we make this work in that setting, knowing that regulations around its use are coming that will ultimately prevent some of that from happening, but, more importantly, will also affect whether we can use it at all down the road.
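A minimal sketch of what that kind of guardrail can look like in practice: scrub anything resembling client identifiers or credentials before a prompt ever reaches an external model. The patterns and the `redact` helper below are hypothetical illustrations, not SupportNinja's actual tooling; a real deployment would lean on a dedicated PII/DLP library rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would use
# a dedicated PII/DLP detection library.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII or a credential before the
    prompt ever leaves your network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Customer jane.doe@example.com can't log in, key sk-abc123def456ghi789"
print(redact(prompt))
# Customer [REDACTED EMAIL] can't log in, key [REDACTED API_KEY]
# Only the redacted prompt would ever be sent to the external model.
```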

Andrea Silas: So while we're waiting for the regulation, is there something your company is doing because of how you're looking at it from an ethical standpoint?

Craig Crisler: So one of the things we're doing: the EU does have some guidance relating to the use of generative AI in different scenarios and how and in what ways we can interact with it. But one of the big ones is that we implemented what we call an AI code of ethics.

One of those things, in a real basic way, is that if we're supposed to protect the data of our clients and our clients' customers, we should protect it in the same way whether we use AI or not; it's the same kind of protection we should keep in place. But the code also gives folks guidance pointing to specific ways in which we can use it.

And that is really important, because it at least gives you a basis to start from to prevent things like the Samsung breach, as an example. That was just an agent doing what they thought was right, and I don't blame them; I probably would have done the same thing if I didn't understand that it actually feeds the model and is a breach. Then we also put some structural stuff in place.

We have virtual sandboxes created so that we can actually look at and use data with a model, but the data is protected from being absorbed by the actual model. That's one way you can handle it structurally until some of the guidance comes out.
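One way such a sandbox could work, sketched below as a pseudonymization proxy: real client values are swapped for throwaway placeholders before the call, and mapped back only after the response returns, so the external model never sees the originals. The `sandboxed_ask` function and the stubbed model client are assumptions for illustration, not a description of SupportNinja's actual architecture.

```python
import uuid

def sandboxed_ask(question: str, secrets: dict[str, str], call_model) -> str:
    """Swap real client values for throwaway placeholders before the call,
    then restore them locally, so the model never sees the originals."""
    placeholders = {}
    for name, value in secrets.items():
        token = f"<{name}-{uuid.uuid4().hex[:8]}>"
        placeholders[token] = value
        question = question.replace(value, token)
    answer = call_model(question)             # only placeholders leave the sandbox
    for token, value in placeholders.items():
        answer = answer.replace(token, value)  # originals restored on our side only
    return answer

# Stubbed model client for the example (assumption: a real LLM call goes here).
fake_model = lambda q: f"Acknowledged: {q}"
print(sandboxed_ask(
    "Reset the password for acme-corp on db-17.acme.internal",
    {"client": "acme-corp", "host": "db-17.acme.internal"},
    fake_model,
))
```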

Andrea Silas: And we have to think so quickly, trying to figure out how we take care of these things, because of how quickly things are happening around us. There's a lot of murkiness around it, and most people don't really understand what's happening.

We're learning to use it as well as we can, but we don't really think about the long-term consequences, or even the short-term ones. It's a good thing that your team does, and that you're trying to create the boundaries before the regulations run their course.

Craig Crisler: I was actually thinking about titling the session something like: here's a way to, at minimum, avoid being that first major lawsuit. No one wants to be the first big one, at least. To your point about the murkiness of it all, it's so easy to potentially have a misstep without realizing you actually did it.

And it's so easy to do. Even when you look at what the US is doing in terms of regulation, it's fairly vague at this point. There's guidance, but it's vague by design, because the tools are constantly changing, and so it has to have some inherent flexibility in it.

But the one thing you hear consistently is: keep personal information out of it. Things like that seem simple, but really it's about taking what we in the support realm view as standard practice and applying the same concepts here.

Andrea Silas: Of course regulations are great, but then there's also the concern: let's not impede progress, and let's not fall on our faces while we're trying to figure it out. I don't envy them, or us for that matter.

I'm sure a lot of people are interested in hearing more about this topic; it's really intriguing. Craig will be at our upcoming Support Driven Expo in October. Craig, could you give us a couple of ways to get a hold of you if we have more questions?

Craig Crisler: Absolutely. If you want to hit me up on the Support Driven Slack, I'm @Craig, real easy. If you'd like to reach out to me directly at SupportNinja, you can email me at craig@supportninja.com. My door is always open for any questions, concerns, feedback, or suggestions.

 I'm super excited to talk about this stuff. It's an exciting time to see how all this will play out. And I really am excited to present.

Andrea Silas: We're looking forward to it. Thank you, Craig.

Check out the video now featuring Craig Crisler, our upcoming speaker for October’s Expo. Be sure to watch and get a taste of what's to come!
