AI trends and tech with TechCrunch’s Ron Miller

Alex Ross

November 28, 2023 · 6 minute read

[Podcast cover image for the AI trends episode]

Is the regulation of artificial intelligence technologies stifling innovation, or is it the key to a responsible tech future? How are established incumbents and newer players like OpenAI and Anthropic shaping the AI and machine learning landscape?

Special guest Ron Miller, Enterprise Reporter at TechCrunch, offers insights into these questions and other AI industry issues.

Artificial intelligence as a platform and a market

As AI use grows more widespread, the question of AI as a platform versus AI as a market remains open.

A good analogy for AI as a platform is the cloud infrastructure market. Just as most companies buy compute from cloud infrastructure vendors rather than building their own data centers, most will consume foundation models from a handful of providers rather than building their own.

“That’s the kind of platform that most people plug into. People are going to be plugging into other people’s large language models because it is not going to make a lot of sense for most companies to try and build their own when they do not have the resources, the data, and when it is just going to be too expensive and too time-consuming,” says Miller.

On the other hand, AI as a market is just like the software market. For example, a company may sell CRM, ERP, or any other kind of software that other companies consume, and that is how it makes its money. The same is true of the market for selling AI models.

“The question I explored was, is this going to be something that is more of the former or the latter, and I have to say, even though it’s a bit of a cop out, it is both. But if I had to choose one, I would say it is probably more platform because AI is going to be part of everything going forward,” says Miller.

Just take a look around. This year, many major enterprise software vendors have announced or released the addition of generative AI capabilities to their platforms. It is predicted that many more companies and platforms will be jumping on this bandwagon, but will that lead to more or less excitement over AI tools?

For the time being, it is more likely that the hype around AI adoption, implementation, and development will remain strong rather than calm down.

Although a major catalyst for this excitement was the recent release of ChatGPT, AI has been around for decades, mostly in the background, powering tasks like report generation and data processing.

“When ChatGPT came along, and it was something that you could see, it was really almost magical. I mean, you can ask it a question, and it gives you an answer. You can describe a picture, and it makes the picture. You can say I need this transformed into SQL, and it does it. That’s pretty amazing,” Miller says.

However, all that magic comes with downsides, such as hallucinations and bias baked into the models. Hype around AI systems can amplify these problems, along with unresolved concerns like intellectual property rights, that still need to be addressed.

AI regulation and its impact on innovation

While companies are very excited about AI trends, they are also wary of the issues these tools may raise for legal and compliance departments. This sense of caution extends beyond the companies implementing AI systems within their platforms all the way to the government level: the White House even issued an executive order to set standards for AI safety and security.

Is this cautionary approach a net positive, or will it inevitably stifle AI innovations? There may not be a clear answer to this question.

“It could be both. It depends on what happens. There are two major questions from a regulatory perspective: How do we get in front of and regulate this thing that we don’t know that much about and that’s moving very quickly? And how do you regulate it in a way that you don’t stifle innovation but also don’t let something that’s unsafe into the public sphere?” Miller says.

Governments have historically struggled to regulate technology. Just think about how long it took to really start regulating companies like Facebook and Google, and then consider how those efforts are going.

Establishing and enforcing regulations in the online space is a tough job to have, especially when things move as quickly as they are with AI. 

“If you are creating this technology, and you have all of these different governments and regulators coming up with different ideas of what regulation should look like, it is going to be very difficult from a production perspective. It is a hard line for any government or regulatory body to walk, and it is hard for the companies that have to deal with the myriad set of regulations when everything is sort of happening all willy-nilly,” Miller says.

Furthermore, big players in the AI space are being invited to the table to discuss these regulations, which raises the question: will they propose ideas that are fair to the market as a whole, or ones that benefit them while subtly discouraging smaller startups?

And what about global regulations and cooperation? Coming to an agreement regarding safety and security around AI is made harder by the fact that countries around the world are racing one another toward advancement. (Explore more about creating trustworthy AI.)

Think back to OpenStack, the open-source attempt to counter proprietary cloud platforms. The companies that controlled the cloud market operated like black boxes; researchers wanted to understand how those systems worked but could not examine them at any meaningful level of detail.

“We face a similar challenge with AI and an even more critical one. If you have these AI models where you can’t understand what’s happening, what data was used to create the model, and all of the factors that they added into it, it makes it hard to regulate and to understand as a customer. More open models have a better chance of at least having some level of understanding even if you don’t understand everything,” says Miller.

However, it is worth noting that there are some people who think that open models could lead to more problems.

That being said, how exactly are enterprises engaging with AI tools and model training? Are there more companies gravitating toward deep learning model training or simply using off-the-shelf models?

It truly depends on the company. Smaller companies are plugging into OpenAI's API and doing it that way. Larger companies are building their own large language models (often trained on real-world customer interactions) and retaining flexibility in how they use their customer data.

“The larger the company, obviously, the more options they can offer, but there are companies who are plugging into a single model as well as companies who are trying to keep their options open by not committing to any particular model because it is a moving target. There are very few companies, I think, that are putting their own model out there because it’s just so expensive,” Miller says.
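Miller's point about smaller companies "plugging into" a hosted model can be sketched with a minimal call to OpenAI's chat completions REST endpoint, using only the Python standard library. This is an illustrative sketch, not production code: the model name and prompt are placeholders, and a real call requires an `OPENAI_API_KEY` environment variable.

```python
import json
import os
import urllib.request

# OpenAI's hosted chat completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(prompt, model="gpt-4o-mini"):
    """Return the JSON payload the chat completions API expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt, model="gpt-4o-mini"):
    """POST a single prompt; requires OPENAI_API_KEY to be set."""
    key = os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError("set OPENAI_API_KEY to call the API")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's reply lives in the first choice's message.
    return body["choices"][0]["message"]["content"]
```

The appeal of this approach is exactly what Miller describes: a few dozen lines against someone else's model, versus the data, compute, and time needed to train your own.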

It’s still so early in the game for AI that competition is at the forefront of implementation and development.

Companies are racing one another toward an undefined finish line of new milestones while also competing for the consumers who will use their platforms.

All the while, they are trying to keep doors to new opportunities open in that same competitive spirit, hoping not to miss an opening to get ahead.

Watch the full interview for more of Ron’s views on artificial intelligence, machine learning, and how business leaders can take advantage of future trends.

Check out more insights in our webinars, and subscribe to Generation AI for more stories like this!