Best practices for deploying AI within large organizations

By Gonca Gursun


Developing AI products within large organizations is a completely different process from developing stand-alone solutions at start-ups, or from writing code for your Ph.D. work for that matter. There are various reasons why it is a different, and oftentimes more complex, process.

First and foremost, large organizations are large because they already have products/solutions in their domain of operation. For such organizations, AI is often a tool, an enabler for improving their existing products. Second, almost no team or department drives product development by itself. In large organizations, product development requires many teams to collaborate.

I have been developing autonomous driving solutions for the past three years in a giant automotive manufacturing company. Here are a few notes on my experience in building AI-based products.

Know the domain well or know someone who knows it well

Chances are, as an AI expert, you have been developing and optimizing generic models such as deep neural networks, which is great because, hey, you can apply them in any context, right? Well, as you move into the product space, you will soon realize that the performance of a product is assessed differently than the performance of a generic model. At the end of the day, a product is only as good as its sales.

For instance, if you are working in an automotive company, before you jump in with your autonomous driving solutions, you first need to educate yourself about the existing driver-assistance products: their capabilities, pain points, data collection methods, customer requirements, release processes, and even the legal regulations in various countries. The good news is that in large organizations there are many domain experts you can talk to in order to gain such domain knowledge.

Start simple, very simple

Chances are what you are working on is not the first of its kind. Your organization already has a product for its customers, but it is looking for a more efficient and less costly way of developing it. Just because a new AI-based concept has emerged, it is not going to throw away its existing products and deploy a state-of-the-art AI solution from scratch.

Therefore, replacing a product, especially a complex engineering product, with state-of-the-art, purely data-based solutions almost never happens over the short term. However, you can still make your way into the existing development with your AI expertise. The key is to start simple, very simple. Once you gain a good understanding of the domain as mentioned above, you can identify one or two problems to address with simple, off-the-shelf machine learning methods.

Think of this as your “minimum viable product,” but in this case replace “product” with “ML method.” There are two upsides to doing so. First, you get to check whether an AI-based solution makes sense by playing with the data and formalizing the problem, without putting too much effort into building complex models. Second, simple models are often easier to explain and therefore less intimidating to your fellow non-AI domain experts.
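To make the “minimum viable ML method” idea concrete, here is a minimal sketch in Python. The scenario is hypothetical (predicting a vehicle's stopping distance from its speed), and the point is the workflow, not the model: a closed-form linear fit lets you check whether the data carries signal before anyone invests in a complex architecture.

```python
# MVP baseline sketch (hypothetical scenario): before proposing a deep model,
# check whether a simple method captures the signal at all. Here we fit
# ordinary least squares by hand, with no ML framework required.

def fit_linear(xs, ys):
    """Closed-form simple linear regression: y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy data: speed (km/h) vs. measured stopping distance (m).
speeds = [30, 50, 70, 90, 110]
distances = [14, 27, 42, 60, 80]

a, b = fit_linear(speeds, distances)

def predict(speed):
    return a * speed + b
```

If even this baseline tracks the data reasonably well, you have a defensible starting point to show domain engineers; if it does not, you have learned something about the problem cheaply.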

First build trust and then a transfer pipeline

So far, you’ve made the effort to understand the domain, shown some improvements in a few aspects of the product with your simple AI models, and explained to your fellow domain engineers and business units why your models make sense. In a way, at this point, you’ve established trust with them. Use that trust to build continuous transfer pipelines.

A transfer pipeline is an interface from your AI solution to the product’s simulation and test environment. In such environments, the performance of the product is evaluated with a set of key performance indicators. Perhaps to your surprise, these indicators are often different from the standard machine learning evaluation metrics. It would require another post to explain the interplay between ML evaluation metrics and system performance indicators, but suffice it to say that these indicators matter the most.

From there on, having a rapid integration pipeline into the test and simulation environment and evaluating with the system performance indicators will guide your model development. It will give you hints on what types of models and architectures make sense. In other words, it will structure your development process and save you from spending time and energy on building complex models with no clear benefits for the product.
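The gap between ML metrics and system-level indicators can be sketched in a few lines. This is a hypothetical example: the function names and the KPI (unwarranted braking events per driving hour) are illustrative, not from any real test framework. The same set of predictions can look acceptable by a standard ML metric while failing the indicator the product team actually cares about.

```python
# Hypothetical sketch: evaluate one set of model predictions with both a
# standard ML metric (accuracy) and a system-level KPI (false interventions
# per driving hour). Labels: 1 = "brake", 0 = "do not brake".

def ml_accuracy(preds, labels):
    """Standard ML metric: fraction of correct predictions."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def false_intervention_rate(preds, labels, hours):
    """System KPI: unwarranted braking events per simulated driving hour."""
    false_brakes = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return false_brakes / hours

# Toy results from a 2-hour simulated drive.
labels = [0, 0, 1, 0, 1, 0, 0, 1]
preds  = [0, 1, 1, 0, 1, 0, 1, 1]

acc = ml_accuracy(preds, labels)                          # 0.75
kpi = false_intervention_rate(preds, labels, hours=2.0)   # 1.0 per hour
```

Here 75% accuracy may sound fine in an ML report, but one unwarranted braking event per hour would likely be unacceptable as a product indicator; a rapid integration pipeline surfaces that mismatch early.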

Finally, as you might have already distilled from the points above, successful deployment of AI solutions within a large organization requires a lot of good, effective communication with others. You will probably need to attend many meetings just to align on the product goals. To many AI developers this feels like overhead, but it really is at the core of the job, and accepting that from the start makes the development process more enjoyable!

About the author

Gonca Gursun

Dr. Gonca Gursun is a research scientist and product owner at the Bosch Center for Artificial Intelligence, where she and her team develop AI-based autonomous driving solutions. Gonca writes about AI technologies and products on Twitter and LinkedIn.
