AutoML and the Rise of the Citizen Data Scientist

Scaling AI | Marie Merveilleux du Vignaux

The rapid acceleration of AutoML has spurred the application of automation to the whole data-to-insights pipeline, from data cleaning through feature selection and feature creation to algorithm tuning and operationalization. At this larger scale, paired with an increasing volume of data, AutoML is producing more insights in less time. How can this development help analysts rise to the next level of their careers?

Enter the citizen data scientist: an employee who is data- and business-savvy (whether on the business side or in an analyst role) but not a formally trained data scientist. If this sounds familiar (i.e., you're a citizen data scientist), read on to see how you can leverage AutoML to accelerate your organization's data efforts.

An Increase in Efficiency Leads to the Creation of a New Profile

At a very high level, AutoML means using machine learning techniques to automate the process of applying machine learning itself. Today, automated analytics can add efficiency to large swaths of the data pipeline, with the potential to impact the entire process and influence the structure of data teams long term.

The vision for the future of augmented analytics is one of complete (or nearly complete) automation, where one could feed a dataset and a target to an automated pipeline and get back cleaned data with engineered features, together with the best performing model on top.
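That "dataset in, best model out" loop can be sketched in a few lines. The sketch below is illustrative only and assumes scikit-learn as the toolkit; real AutoML engines also automate feature engineering, hyperparameter search, and much more. The candidate list and function name are assumptions, not any particular product's API.

```python
# Minimal sketch of an AutoML-style selection loop: given features X and a
# target y, run each candidate algorithm through the same automated
# preprocessing (impute missing values, scale features) and keep the best
# performer by cross-validated accuracy. Illustrative only -- not a real
# AutoML engine.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def auto_select_model(X, y):
    # Hypothetical candidate set; a real engine would search far more widely.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    scores = {}
    for name, model in candidates.items():
        pipe = make_pipeline(SimpleImputer(), StandardScaler(), model)
        scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    best = max(scores, key=scores.get)
    return best, scores

X, y = load_breast_cancer(return_X_y=True)
best, scores = auto_select_model(X, y)
print(f"Best model: {best}")
for name, score in scores.items():
    print(f"  {name}: {score:.3f}")
```

The point of the sketch is the shape of the workflow: the user supplies data and a target, and the loop handles preprocessing and model comparison automatically.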

This accelerated AI modeling produces more results faster thanks to the help of the citizen data scientist.

What Does This Mean for Data Scientists?

Citizen data scientists are data- and business-savvy employees — whether they sit on the business side or in an analyst role — but not formally trained data scientists. They do not operate in dedicated data science or analytics roles, but instead use data science platforms to easily explore, analyze, and deploy data models. User-friendly centralized platforms (like Dataiku) enable these employees to access the data without any need for specialized technical training.

Citizen data scientists can collaborate with data scientists by handling the less advanced work with AutoML, including data preparation, and leaving it to data scientists to refine and fine-tune projects before pushing them to production. A critical part of this equation is empowering citizen data scientists in smart ways. That means not simply letting them crank out models without proper training or understanding of the process, which produces models that are disconnected from the business questions they're trying to answer. To achieve that kind of transparency, citizen data scientists need to be equipped with the right tools.

What Kind of Tools Should Citizen Data Scientists Use?

Incorporating the work of non-data scientists into data projects in meaningful ways requires a fundamental shift in mindset around data tooling. By nature, these profiles generally don’t have the skills for advanced feature engineering, parameter optimization, algorithm comparison, etc. What they do bring to the table is intimate knowledge of the problems at hand and business questions that need to be answered.
Citizen data scientists rely on end-to-end platforms that contain a powerful automated machine learning engine. With such tools, citizen data scientists can optimize and deploy models with minimal intervention.

Some important elements to consider when choosing which AutoML tool will accompany your citizen data scientist include:

  • Usability: The system should be easily usable by non-developers with minimal technical skill. Look for a system that supports augmented analytics by providing contextual help and explanation for different parts of the data process and a visual, code-free user interface.
  • Stability: Users without intimate knowledge of data storage technologies need to be able to execute augmented analytics using a system that can be reliably leveraged from one step of the data pipeline to another.
  • Transparency: It’s difficult to trust something you don’t understand. Therefore, the best tool will be one that gives an accurate description of algorithms used, explains why they were chosen, and provides the right level of knowledge necessary for citizen data scientists to trust outcomes and determine if they are right for the project at hand.
  • Adaptability: As data projects are often worked on by multiple individuals in various roles, the chosen tool should offer adaptability options. For example, outputs should be exportable as Python code so that more technical collaborators can inspect, learn from, and extend them.
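The transparency criterion above is worth making concrete. One simple form a tool's explanation can take is a ranked view of which inputs drive a model's predictions, so a citizen data scientist can sanity-check the outcome against business knowledge. The sketch below is a hypothetical illustration, assuming a scikit-learn random forest and pandas for readable output; it is not any specific product's report.

```python
# Sketch of a transparency report: after fitting a model, surface the
# features that most influence its predictions, with human-readable names.
# Illustrative only -- dataset and model choice are assumptions.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, data.target)

# Random forest importances sum to 1, so each value reads as a share of
# the model's total attention -- an easy number to discuss with the business.
importances = (
    pd.Series(model.feature_importances_, index=X.columns)
    .sort_values(ascending=False)
)
print(importances.head(5))
```

A report like this does not replace a data scientist's review, but it gives the citizen data scientist enough visibility to ask the right questions before a model moves toward production.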
With automation in place through the right AutoML tool and a strong team of data scientists and citizen data scientists, organizations can manage more projects and scale AI across the enterprise.
