The DataOps role is unique in the data analytics space: its goal is to enable data engineers, scientists, analysts and governance teams to own the pipelines that run the analytics assembly process. Essentially, DataOps engineers work on, but not in, those pipelines, according to a DataKitchen webinar titled “A Day in the Life of a DataOps Engineer.”

“We want to run our value pipeline like Toyota. We also want to be able to change that pipeline, take a piece of it, change it, and be able to iterate quickly and change our pipelines as fast as Silicon Valley companies do on their websites,” said Christopher Bergh, the CEO and “head chef” at DataKitchen.

DataOps combines Agile development, DevOps, and statistical process control, and applies them to data analytics.
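That last ingredient is the least familiar to most data teams, so a concrete illustration helps: statistical process control watches a pipeline metric, such as daily row counts, and flags any value that falls outside control limits, conventionally the historical mean plus or minus three standard deviations. The following is a minimal sketch in Python; the metric and the numbers are invented for illustration, not something prescribed by DataKitchen.

```python
from statistics import mean, stdev

def spc_check(history: list[float], new_value: float, sigmas: float = 3.0) -> bool:
    """Return True if new_value falls inside the control limits
    derived from historical observations (mean +/- sigmas * stdev)."""
    center = mean(history)
    spread = stdev(history)
    lower, upper = center - sigmas * spread, center + sigmas * spread
    return lower <= new_value <= upper

# Hypothetical daily row counts from an ingest job.
row_counts = [10_120, 9_980, 10_350, 10_045, 10_210, 9_890, 10_160]
todays_count = 4_300  # far below the control limits: something upstream broke

if not spc_check(row_counts, todays_count):
    raise ValueError(f"Row count {todays_count} is outside control limits; halting pipeline.")
```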

However, the challenge in many organizations is that not everyone has the mindset that their job is to deliver value to the end user, because they are so focused on the task immediately in front of them.

“The challenge is that, in a lot of ways, the DataOps role isn’t there for a lot of people who do data engineering and data science. It’s not apparent. So if they’re going to build something, like a bunch of SQL or a new Jupyter Notebook, they kind of throw it into production and say, I’ve done my work. My definition of done as a data engineer is: it worked for me. A lot of the time, the challenge for people doing data analytics is that they focus on their little part and think the process of putting it into production is someone else’s problem,” Bergh said. “It’s very task-focused and not value-focused. Done should mean it’s in production.”

DataOps engineering is about collaboration through shared abstraction, whether that’s putting nuggets of code into pipelines, creating tests, running the factory, automating deployments, or working across different groups of people in the organization. From there, it’s about automating as many of those tasks as possible. “DataOps engineering is about trying to take these invisible processes, pull them forward and make them visible through a shared abstraction and then automate them,” Bergh said.
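One way to picture such a shared abstraction is a pipeline defined as data, where every step carries its own tests and any team member can read, run, or extend the whole process. The sketch below is hypothetical Python, not DataKitchen’s product API; the step and test names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    """One visible unit of the pipeline: a transform plus the tests that guard it."""
    name: str
    run: Callable[[dict], dict]
    tests: list[Callable[[dict], None]] = field(default_factory=list)

def run_pipeline(steps: list[Step], data: dict) -> dict:
    """Run each step, then its tests, so failures surface at the step that caused them."""
    for step in steps:
        data = step.run(data)
        for test in step.tests:
            test(data)
    return data

# Illustrative step: ingest raw records and verify something actually arrived.
def ingest(data: dict) -> dict:
    return {**data, "rows": list(data.get("raw", []))}

def test_rows_present(data: dict) -> None:
    assert data["rows"], "ingest produced zero rows"

pipeline = [Step("ingest", ingest, [test_rows_present])]
result = run_pipeline(pipeline, {"raw": [{"id": 1}, {"id": 2}]})
```

Because the pipeline is an ordinary data structure, it is visible to everyone: adding a step or a test is a small, reviewable change rather than a hidden manual procedure.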

The challenge with automation, as in many other scenarios, is that no one fully owns the process. This is where DataOps engineers come in.

“While implementing a DataOps solution, we make sure that the pipeline has enough of a variety of automated tests to ensure data quality, leave time for more innovation, and reduce both the stress and the fear of failure,” said Charles Bloche, a data engineering director at DataKitchen.
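The webinar doesn’t enumerate those tests, but in practice “a variety” usually means mixing structural, completeness, uniqueness and business-rule checks, and reporting every failure rather than stopping at the first. A minimal sketch using pandas, with invented column names:

```python
import pandas as pd

def check_quality(df: pd.DataFrame) -> list[str]:
    """Run several kinds of data-quality tests; collect every failure
    so a single run reports everything that is wrong."""
    failures = []
    # Structural test: the columns downstream code depends on must exist.
    expected = {"order_id", "customer_id", "amount"}
    if missing := expected - set(df.columns):
        failures.append(f"missing columns: {sorted(missing)}")
        return failures  # remaining checks would raise on absent columns
    # Completeness test: keys may never be null.
    if df["order_id"].isna().any():
        failures.append("null order_id values found")
    # Uniqueness test: the primary key must not repeat.
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    # Business-rule test: amounts must be non-negative.
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

df = pd.DataFrame({"order_id": [1, 2, 2],
                   "customer_id": [10, 11, 12],
                   "amount": [5.0, -1.0, 3.0]})
print(check_quality(df))  # ['duplicate order_id values found', 'negative amounts found']
```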

In effect, every error leads to a new automated test, which in turn improves the system. It is also the DataOps engineer’s role to test every step of the way, so errors are caught sooner and recovery is faster, and to empower collaboration and reuse.
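That loop is easy to make concrete: when a production incident reveals, say, a join that fans out duplicate rows, the fix ships together with a regression test that fails if the condition ever recurs. A hypothetical pytest-style example, with invented table and column names:

```python
# Regression test added after a (hypothetical) incident in which duplicate
# customer rows fanned out a join and silently doubled order rows.
# It now runs on every pipeline run, turning yesterday's error into
# tomorrow's guardrail.
import pandas as pd

def join_orders_to_customers(orders: pd.DataFrame,
                             customers: pd.DataFrame) -> pd.DataFrame:
    return orders.merge(customers, on="customer_id", how="left")

def test_join_preserves_row_count():
    orders = pd.DataFrame({"order_id": [1, 2], "customer_id": [10, 10]})
    customers = pd.DataFrame({"customer_id": [10], "region": ["EMEA"]})
    joined = join_orders_to_customers(orders, customers)
    # The original incident: the join changed the number of order rows.
    assert len(joined) == len(orders), "join changed the order row count"
```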

“For a data warehouse, the product is the dataset; for an analyst, the product is the analysis; and for a DataOps engineer, the product is an effective, repeatable process,” Bloche said. “We are less focused on the next deadline; we’re focused on creating a process that works every time. A DataOps engineer runs toward error, because error is the key to the feedback loop that makes complex processes reliable. Errors are data.”