Embracing an ML-first mindset helps startups accelerate time-to-market and build long-term competitiveness

Of the many fascinating insights I get from working with successful startups across the world, one particularly stands out: machine learning (ML) and artificial intelligence (AI) are no longer aspirational technologies. I’m not alone in that notion. IDC predicts that by 2024 global spending on AI and cognitive technologies will exceed $110 billion, and Gartner forecasts that by the end of 2024, 75 percent of enterprises will shift from piloting to operationalizing AI.

Born in the cloud, most startups can kickstart their “digital transformation” journey early in their life with far less technical debt. They can come right out of the gate with a culture of innovation and acceleration, applying ML to what could soon become vast quantities of data to make accurate forecasts, improve decision-making, and deliver value to customers quickly.

In fact, startups are uniquely positioned to take advantage of scalable compute power and open-source ML libraries to create never-before-seen businesses focused on automation, efficiency, predictive power, and actionable insights. For instance, AWS collaborated with Hugging Face, a leading open-source provider of natural language processing (NLP) models known as transformers, to create Hugging Face AWS Deep Learning Containers (DLCs), which give data scientists and ML developers a fully managed experience for building, training, and deploying state-of-the-art NLP models on Amazon SageMaker. Data scientists and developers around the world can now fine-tune and deploy these pre-trained, open-source models, reducing the time it takes to set up and use them from weeks to minutes.
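
To make that workflow concrete, here is a minimal sketch of deploying a pre-trained Hugging Face model to a real-time SageMaker endpoint with the SageMaker Python SDK. The model ID, framework versions, and instance type are illustrative placeholders I chose for the example, not specifics from this article; check the SageMaker documentation for currently supported combinations.

```python
# Minimal sketch: deploy a pre-trained Hugging Face model to a SageMaker endpoint.
# Model ID, framework versions, and instance type are illustrative placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions (works inside SageMaker)

# Tell the Hugging Face inference container which Hub model and task to load.
hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",   # versions must match an available DLC
    pytorch_version="1.13",
    py_version="py39",
)

# Create a real-time endpoint and send it a test request.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Setting this up took minutes, not weeks."}))

predictor.delete_endpoint()  # clean up to avoid ongoing charges
```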

This shift towards ML-driven efficiency is changing the way founders and creators think about getting their products and services to market. The drive to accelerate the pace of innovation through ML is fueled by access to open-source deep learning frameworks, growing availability of data, accessibility to cutting-edge research findings, and the cost-effectiveness of using the cloud to manage, deploy and distribute workloads.

My advice to founders and builders is that now is the time to build an “ML-first” business, integrating ML from day one, whether they build their own ML models or leverage AI solutions that use pre-trained models. Startups that are ML-first will be in the best position to take what we call a “Day One approach” – being customer-obsessed, focused on results over process, and agile enough to embrace external trends quickly. Getting it right the first time is less important, as experimentation and risk-taking are the root of all product growth. With that in mind, here are four ways for startups to build and grow a strategic ML-first business:

Choose to be ML-first

One proven leadership principle for getting to market quickly that startups should embrace is:

Bias for action. Speed matters in business and many decisions and actions are reversible and do not need extensive study.

Bias for action and a Day One culture of quick experimentation, rapid prototyping, and failing fast to learn and iterate will help bolster go-to-market strategies for cloud-native startups by:

  • Enabling an extremely tight and actionable feedback loop with customers, a cloud provider, and key stakeholders;
  • Automating ML operations for improved efficiencies; and
  • Identifying and exploiting core IP to launch models, products, and features quickly.

Since speed in business matters, using ML to innovate and increase agility matters too. This includes having the right ML tools to automate running parallel and distributed training jobs and to manage multiple ML model experiments. ML-driven automation eliminates the cost and time of manually sifting through large repositories of data, logs, and traces to identify and fix errors, work that can ultimately slow down engineering velocity. ML can also generate predictions and allow for planning around them, so organizations not only know which course of action to take but can act on it more quickly.
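
As a small-scale illustration of that kind of automation, the sketch below uses scikit-learn and joblib (tools of my choosing, not named in this article) to train several candidate models in parallel and keep the best one; managed ML services extend the same pattern to distributed clusters.

```python
# Illustrative only: run several model experiments in parallel and keep the best.
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Candidate configurations to evaluate as separate "experiments".
experiments = [
    {"n_estimators": 50, "max_depth": 4},
    {"n_estimators": 100, "max_depth": 8},
    {"n_estimators": 200, "max_depth": None},
]

def run_experiment(params):
    model = RandomForestClassifier(random_state=0, **params)
    score = cross_val_score(model, X, y, cv=5).mean()
    return params, score

# Train and evaluate all candidates in parallel rather than one at a time.
results = Parallel(n_jobs=-1)(delayed(run_experiment)(p) for p in experiments)

best_params, best_score = max(results, key=lambda r: r[1])
print(f"Best configuration: {best_params} (accuracy {best_score:.3f})")
```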

Another key factor in startup success is pattern matching across huge quantities of data. ML can find patterns in data volumes that would take teams of humans years to analyze. For example, BlackThorn Therapeutics (now part of Neumora Therapeutics), a clinical-stage neurobehavioral health company, has built a platform that can quickly iterate and get new treatments to market by rapidly collecting and analyzing multimodal psychiatric data at scale. In early discovery and pre-clinical research, scientists need access to extensive computing power to perform tasks such as computational simulations or large-scale analyses. BlackThorn applies its data-driven insights to direct its drug candidates to the neurobiologically defined patient populations most likely to respond to therapies. To make this happen, BlackThorn takes advantage of cloud-based ML that scales up during peak demand and back down when demand drops, so analyses and experiments can run in parallel rather than as one-off trials.

Plan to evolve your ML models

To build on the benefits of being ML-first, organizations can’t stop at having the right ML models and tools. ML isn’t a one-and-done event; it’s an iterative process. Once a prototype model is created, it must be easy for developers and data scientists to work with, which means processing data, training the model on the right data, and deploying it in a scalable way. One of the biggest mistakes startups make is deploying ML models with no plan to monitor and update them. It is imperative to have a data strategy in place that continually collects new data to feed the models, retrains them on fresh datasets, and keeps asking, “Is this the best model for the job, and are my customers reaping its value?” Continually monitoring model predictions is equally crucial, so the model doesn’t suffer “concept drift” and become biased toward certain outcomes as the real world changes and generates new data.
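
A lightweight way to start that monitoring, sketched below with SciPy (my choice of tool, not one the article names), is to compare the distribution of a model’s recent prediction scores against a reference window and flag a statistically significant shift for review and possible retraining.

```python
# Illustrative drift check: compare recent prediction scores against a
# reference window with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference_scores, recent_scores, alpha=0.01):
    """Return True if the two score distributions differ significantly."""
    result = ks_2samp(reference_scores, recent_scores)
    return result.pvalue < alpha

# Simulated example: scores captured at deployment time vs. a shifted live window.
rng = np.random.default_rng(0)
reference = rng.beta(2, 5, size=5_000)   # prediction scores at deployment
production = rng.beta(3, 4, size=5_000)  # prediction scores from recent traffic

if drift_detected(reference, production):
    print("Score distribution has shifted; review the model and consider retraining.")
else:
    print("No significant drift detected in this window.")
```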

It all comes down to agility. To evolve ML models dynamically, developers must be able to remove inefficiencies and use automation to apply the best available components. They should also lean into modularity for greater flexibility and use orchestration to automate and manage workflows. This frees up developer time to work on key business problems and saves the expense of sourcing specialized talent to build and maintain complex ML pipelines that must function optimally over time.
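
As a small illustration of that modularity (using scikit-learn, which I am assuming here rather than quoting from the article), preprocessing and modeling steps can be composed into a single pipeline object, so any stage can be swapped out or retrained without rewriting the surrounding code.

```python
# Illustrative sketch of a modular ML workflow: each stage is a replaceable
# component, and the whole pipeline is trained and evaluated as one unit.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                    # swap in a different scaler here
    ("model", LogisticRegression(max_iter=1000)),   # or a different estimator
])

pipeline.fit(X_train, y_train)
print(f"Held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```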

iFood, a leading Latin American food delivery service, offers a case in point: it processes 39 million monthly orders from 220,000 registered restaurants in more than 1,000 cities. The challenge with food delivery is that optimal routes and menus change constantly, so the underlying models must be updated continually as well.

To address this, the company used ML services to create automated ML workflows that scale with growing and continually changing demand, improving logistics and operations and automating decision-making. ML-enabled route optimization has reduced the distance delivery personnel travel by 12 percent and cut operator idle time by 50 percent. With the help of ML automation, iFood has increased its delivery SLA performance from 80 percent to 95 percent.

Identify your core IP and leverage the power of open-source

Another common problem startups run into is going to market without identifying their core problem and the IP in their solution that solves it. With that comes a blind spot for the non-IP parts of their stack and the cloud technology they can leverage. This is why startups aren’t building their own data centers, databases, and analytics software: it makes no sense to build everything from scratch, because overly proprietary platforms can quickly become paralyzed when trying to integrate and scale. To maintain a durable competitive advantage, startups should capitalize on their unique value proposition and identify their “moat”: the differentiated IP at the center of the product that is difficult to copy. That’s why one of the biggest questions I ask the startups I work with is, “Where will you build to differentiate, and where will you buy to move fast?”

Another successful trend we’ve observed is startups taking an open-source approach and actively contributing parts of their codebase to the open-source community with the goal of solving broader industry problems. Successful startups always have something proprietary to offer in tandem with that open-source code, typically in the form of advanced versions of the product or executional capabilities that are difficult to mimic.

For example, Seattle-based OctoML built its deep learning model acceleration platform on the open-source framework Apache TVM, an ML stack created by the company’s founders to make high-performance ML accessible anywhere, for everyone. The company, along with a vibrant open-source ML community, is solving a big industry problem: the lack of broad access to technologies that can deploy ML models across any hardware endpoint and cloud provider. Today, OctoML provides a flexible, ML-based automation layer for acceleration that runs on top of the hardware executing ML models at the edge and in the cloud, including GPUs, CPUs, and ML-optimized instances. This lets ML developers deploy trained models to production across various hardware endpoints faster, without sacrificing performance. Fostering more open-source ML tools will, in turn, fuel more R&D and a much greater diversity of ML options.
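
For a sense of what that looks like in practice, here is a hedged sketch of compiling a pre-trained ONNX model with Apache TVM’s Relay front end for a generic CPU target. The file name, input name, and input shape are assumptions for illustration, and API details vary across TVM releases, so treat this as an outline rather than a description of the OctoML platform itself.

```python
# Hedged sketch: compile and run an ONNX model with Apache TVM on a CPU target.
# Exact module paths and method names can differ between TVM releases.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")  # assumed: a classifier exported to ONNX

# Import the model into Relay, TVM's high-level intermediate representation.
input_name = "input"  # must match the ONNX graph's input name
mod, params = relay.frontend.from_onnx(onnx_model, shape={input_name: (1, 3, 224, 224)})

# Compile for a generic CPU; other targets (e.g., "cuda") retarget the same model.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run one inference with the compiled module.
device = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](device))
data = np.random.rand(1, 3, 224, 224).astype("float32")
module.set_input(input_name, tvm.nd.array(data))
module.run()
print("Output shape:", module.get_output(0).numpy().shape)
```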

Prioritize business goals, lean on strategic business relationships, and be ML-first

Startups should embrace the wisdom of “there’s no compression algorithm for experience.” They should stay focused on their business goals and lean on strategic business relationships (from startup advisors, to venture capitalists, to customers) to help fill capacity and capability gaps and provide guidance and marketplace access.

There’s a multiplier effect. Strategic business relationships can provide not only access to early R&D, private betas, and insights on enterprise adoption drivers based on years of experience, but also strong go-to-market support through partnership and co-marketing. These connections help startups learn more about the most pressing problems enterprises are looking to solve and the trends they are seeing across their industries.

It’s also important to know what technology is coming next. These relationships enable startups to be nimble, move fast, scale as they need, and at the same time, think long-term about their roadmaps and customer experiences.

Building a startup isn’t just hard – it’s a crash course in humility. Succeeding over the long term is even harder. Startups are nimble by nature, and their tech stack should reflect that agility. By instilling rapid deployment and experimentation into their entire development processes, they can better position themselves to enter the market and scale competitively, with the right solutions and the right strategy.

It comes down to focus, agility, and velocity: being ML-first, identifying core IP, and building strategic business relationships will help fast-track growth and build staying power in the marketplace.

And whatever you build next, let me know about it.

Allie Miller is Global Head of ML Business Development, Startups and Venture Capital at AWS.

