Martha Heller
Columnist

Get AI in the hands of your employees

Tip
Feb 07, 2024 | 5 mins
Artificial Intelligence, CIO, Data Management

Stephen Franchetti, CIO of global IoT fleet management company Samsara, applies a “bottoms-up” approach to AI innovation.

Stephen Franchetti, CIO, Samsara
Credit: Samsara

When public access to the internet emerged in the late 1990s, CIOs were faced with a question: Do we allow our employees to search freely, or do we put restrictions on access while at work? We all know how that turned out. Restrictions soon lost the battle, and most employees now have open access to the internet.

With generative AI, we face a similar conundrum. Amazon and Apple, for example, are restricting employee use of ChatGPT, while others, like Ford and Walmart, are giving gen AI tools to their employees, with the goal of sparking employee innovation.

Stephen Franchetti, CIO of Samsara, a fleet management SaaS provider that went public in 2021, believes the only way to optimize your AI strategy (or any emerging technology strategy, in fact) is with a bottoms-up approach. “When generative AI exploded on the scene a year ago, Samsara started with a pretty restrictive approach because we didn’t understand the technology,” says Franchetti. “At that time, we were focused on putting in guardrails for privacy and security.” 

But after the team spent more time with the technology, they lifted those restrictions. “Our policy has evolved dramatically this year now that we recognize what generative AI brings to the table,” he says. “We want to get the technology as close to our knowledge workers and subject matter experts as we can. We want to give them those capabilities and allow them to experiment and create.”

Franchetti acknowledges that a KPI- and outcome-driven method is still appropriate for many technology rollouts, but “the organic approach is better for AI, so our deep software development subject matter experts can innovate without a targeted business outcome,” he says. “Of course, these technologies must integrate back into the larger architecture, but the IT team can help them with that.”

Having unleashed the employee base to experiment with generative AI, Franchetti is beginning to see the impact. “We’ve seen an ongoing iteration of experimentation with a number of promising pilots in production,” he says.

He’s also seeing positive AI proofs of concept in purpose-built tools for IT help desk, customer support, and sales and marketing. “We’re experimenting with general-purpose co-pilots or assistants, too,” he says. “We released a couple of options for our employees to experiment with: one commercial LLM service and one that’s open source.” Samsara employees are applying these general-purpose assistants to a variety of use cases, like writing documentation and job descriptions, debugging code, or writing API endpoints.

By using LLM capabilities for code generation, for example, Samsara engineers are more productive in generating boilerplate code, as well as in code documentation and commenting, which is a critical practice for the company. “Some of our engineers don’t have English as their first language,” adds Franchetti, “so bringing AI to commenting and documentation helps them in their work.”
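To make that pattern concrete, here is a minimal sketch of LLM-assisted code commenting, written in Python against an OpenAI-compatible chat API. The model name, prompt, and helper function are illustrative assumptions, not Samsara’s actual tooling.

# A minimal sketch of LLM-assisted commenting, assuming an OpenAI-compatible
# chat API. Illustrative only; not Samsara's tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def add_comments(source_code: str) -> str:
    """Ask the model to return the same code with docstrings and comments added."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption for this sketch
        messages=[
            {"role": "system",
             "content": "Add clear English docstrings and inline comments to the "
                        "code you are given. Return only the annotated code."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(add_comments("def area(w, h):\n    return w * h\n"))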

Having spent a year on this bottoms-up approach to AI innovation, Franchetti offers some advice:

Don’t limit “citizen creation” to engineers: At Samsara, Franchetti estimates that 50% of gen AI usage is by engineers, but the other half is in legal, sales, marketing, finance, and customer support.

Don’t let your current architecture hold you back: Franchetti acknowledges that companies like Samsara, which were born in the cloud, have a head start in gen AI over older companies running on legacy infrastructure. But that doesn’t mean those companies can’t enjoy the fruits of a bottoms-up approach. “I believe your employees can experiment regardless of your architecture,” he says. “They can improve productivity by using AI for the creation of marketing collateral or even finance reconciliation. They can do this in any environment, because these specific tools don’t rely on integration with the broader architecture.”

Clean up your enterprise data: Without clean data, your AI results will be limited. “The power of AI and gen AI comes from the ability to share context with the model, so the model can understand your environment and be fine-tuned to give you better answers,” Franchetti says. “AI starts as a novice about your business, but as it gets trained on your data, the tool becomes an expert.” When you have data in various systems, and conflicting sources of truth, the AI will not have the context needed to get smarter. 
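As one hedged illustration of why that cleanup matters: before fine-tuning a model on internal question-and-answer records, duplicates and conflicting “sources of truth” have to be reconciled or dropped. The sketch below uses hypothetical field names and the common JSONL chat format; it is not Samsara’s pipeline.

# Deduplicate Q&A records and skip conflicting answers before writing a
# JSONL fine-tuning file. Field names ("question", "answer") are hypothetical.
import json

def build_training_file(records: list[dict], path: str = "train.jsonl") -> int:
    """Write clean, deduplicated records to a JSONL file and return how many survived."""
    seen: dict[str, str] = {}
    rows = []
    for rec in records:
        q, a = rec["question"].strip(), rec["answer"].strip()
        if q in seen:
            # Either an exact duplicate or a conflicting answer for the same
            # question; both are skipped here and should be flagged for cleanup.
            continue
        seen[q] = a
        rows.append({"messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": a},
        ]})
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
    return len(rows)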

Be selective about what you scale: With so much citizen creation afoot, CIOs need to develop a process for choosing which pilots to grow into enterprise solutions. To make sure you spend time and money on the solutions with the most potential, Franchetti suggests focusing on results. “When a tool gets to the point where we believe we’re onto something, we ask what measurable business outcomes it will achieve,” he says. “Will it improve customer satisfaction, and will it drive productivity and by how much?”

For example, the technology team at Samsara has spent the last few months experimenting with AI for the internal IT help desk. “We’ve deployed a technology backed by an LLM that allows us to provide a bot inside Slack that resolves help desk support cases,” he says. “Today, 35% of our IT support is fully automated. That’s a measurable improvement and frees our support engineers to focus on higher-order work.” With those outcomes in place, the team began experimenting with a similar LLM for customer service, which they predict will make customer support agents 20% more productive. “We’re now in the process of scaling and deploying it, because we’re able to measure it.”
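The article does not describe Samsara’s implementation, but a bare-bones version of an LLM-backed Slack help desk bot might look like the sketch below, using the slack_bolt library and an OpenAI-compatible API. A production deployment would add retrieval over the IT knowledge base, escalation to human agents, and the resolution-rate measurement Franchetti describes.

# A bare-bones sketch of an LLM-backed Slack help desk bot. Illustrative
# assumptions throughout; the article does not say which stack Samsara uses.
import os
from openai import OpenAI
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])
llm = OpenAI()

@app.event("app_mention")
def handle_ticket(event, say):
    # Send the employee's question to the model and reply in the same thread.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are an internal IT help desk assistant."},
            {"role": "user", "content": event["text"]},
        ],
    )
    say(reply.choices[0].message.content, thread_ts=event["ts"])

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()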