Get behind-the-scenes insights into the world of internal ML platforms and MLOps stack components with Piotr Niedźwiedź and Aurimas Griciūnas in their show, where together with ML platform professionals, they discuss design choices, best practices, and real-world solutions to MLOps challenges.
Brought to you by neptune.ai.
Building MLOps Capabilities at GitLab As a One-Person ML Platform Team With Eduardo Bonet
September 6, 2023 • 63 MIN
On this episode of the ML Platform Podcast, we have Eduardo Bonet as our guest. Eduardo shares insights on the Staff Incubation Engineer’s work at GitLab, the relationship between MLOps and DevOps, the future of metadata tracking, CI/CD and ML model training pipelines, foundational (LLM) model metadata management, and more.
Learnings From Building the ML Platform at Mailchimp With Mikiko Bazeley
August 9, 2023 • 109 MIN
On this episode of the ML Platform Podcast, we have Mikiko Bazeley as our guest. Mikiko shares insights on the ML Platform at Mailchimp, including success stories and golden paths, team structures, templating, generative AI use cases, feedback monitoring, and more.
Learnings From Building the ML Platform at Stitch Fix With Stefan Krawczyk
July 12, 2023 • 77 MIN
On this episode of the ML Platform Podcast, we have Stefan Krawczyk as our guest. Stefan shares insights on the ML Platform at Stitch Fix, including problems solved, success stories and golden paths, team structures and management, product management, feature requests, going open-source, and more.
Navigating Organizational Barriers by Doing MLOps with Leanne Kim Fitzpatrick
May 25, 2023 • 55 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Leanne Fitzpatrick, Director of Data Science at Financial Times. Leanne leads a team of data scientists and engineers who are responsible for building data-driven products and solutions to support the company's journalism, marketing, and subscription efforts. Prior to Financial Times, she held leadership roles at TalkTalk, Hello Soda, and Callcredit Information Group.
Tackling MLOps Challenges in Computer Vision with Marcin Tuszyński
May 9, 2023 • 49 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Marcin Tuszyński, Data Scientist at ReSpo.Vision, a consulting firm that uses artificial intelligence to revolutionize football analysis. They discuss how AI and MLOps have transformed the way we look at football analytics and games, the ideal way to deal with large data sets, and the challenges that you are bound to face when building such a complex AI system.
What Does GPT-3 Mean For the Future of MLOps? with David Hershey
April 26, 2023 • 52 MIN
In this episode of MLOps Live, Stephen speaks with David Hershey, Vice President at Unusual Ventures, a seed-stage venture capital firm offering hands-on support and expertise to startups on their early-stage journey. David was also previously a senior solutions architect at Tecton. Before Tecton, he worked as a solutions engineer at Determined AI and as a product manager for the ML platform at Ford Motor Company.
They discuss the impact of GPT-3 on MLOps, explore how language models have made machine learning more accessible, and examine the challenges of monitoring large language models.
Managing Data and ML Teams to Deliver Value with Delina Ivanova
April 12, 2023 • 54 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Delina Ivanova, Director of Analytics at Mistplay, a loyalty platform for mobile gamers. Delina was also previously the Associate Director of Data and Insights at HelloFresh. This episode is centered around managing data and machine learning teams to deliver value.
Leveraging MLOps Technologies and Principles at Non-ML Companies with Andreas Malekos and Ivan Chon-Hon Chan
March 29, 2023 • 50 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Andreas Malekos, Head of Artificial Intelligence at Continuum Industries, and Ivan Chon-Hon Chan, AI Engineer at the same company. This week’s episode is centered around leveraging MLOps technologies and principles at non-ML companies. They delve into the chasm between their research science and engineering teams and how they bridge it, the process of choosing the right MLOps tools, and how MLOps has significantly changed the structure of their teams.
Doing MLOps for Clinical Research Studies with Silas Bempong and Abhijit Ramesh
March 15, 2023 • 58 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Silas Bempong and Abhijit Ramesh, both Machine Learning Research Engineers at Theta Tech AI. This week’s episode is centered around doing MLOps for clinical research studies. They delve into how AI and clinical research are a match made in heaven, how Theta Tech understands the MLOps workflow, and the elephant in the room: generative AI.
Deploying Conversational AI Products to Production with Jason Flaks
March 1, 2023 • 56 MIN
In this episode of MLOps Live, Sabine and Stephen speak with Jason Flaks, Co-founder and CTO at Xembly. Jason gives insight into deploying conversational AI products in production environments. He shares his background in music composition, math, and science before transitioning into software engineering. They discuss the two-stage pipeline of speech recognition systems, with Jason describing conversational AI as building technology and products capable of interacting with humans in an unbounded conversational domain space. They go on to examine the complexities involved in deploying conversational AI products, such as describing that space in a way machines can easily identify and implementing it with the necessary nuances and requirements. Jason elaborates on their stack of models for converting speech to text and for identifying distinct speakers in a given conversation.
Implementing Vector Search Engines with Kacper Lukawski
February 15, 2023 • 53 MIN
In this episode of MLOps Live, Stephen and Sabine speak with Kacper Lukawski, a Developer Advocate at Qdrant. They discuss the applications where vector search is useful, why vector search engines can be more useful than keyword search engines, and why teams should consider using them more often. They also delve into just how well vector search scales in production, as well as some of the best practices and tools for benchmarking vector search engines.
ML platform teams, feature stores, versioning in data pipelines, and where MLOps extends DevOps with Aurimas Griciūnas and Piotr Niedźwiedź
February 1, 2023 • 55 MIN
In this episode of MLOps Live, Piotr is joined by Aurimas Griciūnas, a Senior Solutions Architect at neptune.ai. Aurimas elaborates on the distinctions between machine learning (ML) platform and data platform teams and the advantages of automating an ML stack. He also goes over the process normalization requirements for streamlined teams from data platform and ML platform groups. Aurimas explains why data platforms are primarily concerned with displaying results, whereas ML platforms are centered on the frameworks and tooling that machine learning professionals use on a daily basis.
Continuous MLOps Pipelines with Itay Ben Haim
January 18, 2023 • 53 MIN
In this episode of MLOps Live, Stephen is joined by Itay Ben Haim, a software team lead at Superwise. Itay discusses the advantages of automating an ML stack and how it can give data scientists more time to innovate and develop solutions that generate business value, while providing businesses with faster deployment of machine learning-based solutions. They identify multiple maturity levels in organizations: level 0, level 1, and level 2 or 3. Moving from level 0 to level 1 requires new skills and tools, such as a DevOps background, configuration files, Kubernetes, workflow managers, metadata stores, and low-latency databases.
Setting Up MLOps at a Healthcare Startup with Vishnu Rachakonda
January 4, 2023 • 52 MIN
In this episode of MLOps Live, Stephen and Sabine are joined by Vishnu Rachakonda, Data Scientist at firsthand. Vishnu discusses setting up MLOps at a healthcare startup and how machine learning can help unlock the value of data to make people healthier and empower healthcare professionals. They explore DevOps principles, generative AI adoption, off-the-shelf models, and responsible AI ethics.
Intersecting DevOps With the ML Lifecycle with Shirsha Ray Chaudhuri
December 21, 2022 • 49 MIN
In this episode of MLOps Live, Stephen is joined by Shirsha Ray Chaudhuri, Director of Engineering at TR Labs. Shirsha discusses standards and best practices for delivering ML solutions with DevOps and explores metrics to measure performance using AWS tooling, as well as data scientists' understanding of why ML models are used and their dream state for DevOps. They also discuss challenges faced in deploying ML systems and some best practices to bridge the gap between DevOps and ML teams.
Writing Clean, Production-Level ML Code with Laszlo Sragner
December 7, 2022 • 51 MIN
In this episode of MLOps Live, Stephen is joined by Laszlo Sragner, Founder at Hypergolic. Laszlo discusses how important clean code and architecture are in developing ML products and uses case studies to explain how teams optimize ML products with clean code refactoring.
With the growth of machine learning adoption, MLOps engineers face the challenge of cultivating a culture of writing clean code when developing ML products.
Laszlo sheds light on the standard practices of clean code and clean architecture in the development of ML, how to release better models and production environments, and how to improve your products' quality.
Differences Between Shipping Classic Software and Operating ML Models with Simon Stiebellehner, Lead MLOps Engineer at TMNL, and neptune.ai CEO Piotr Niedźwiedź
November 23, 2022 • 63 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Simon Stiebellehner, a Lead MLOps Engineer at TMNL (Transaction Monitoring Netherlands). Simon explains how DevOps engineers can transition to MLOps engineers, the approaches MLOps engineers use in creating an ML model, and how a vertical prototype is preferable to a horizontal prototype when test-running a model.
Operating ML models differs from shipping classical software mostly because of models' non-deterministic characteristics. There are significant differences between the design of ML models and that of traditional software, stemming largely from the limits imposed by the time and resources required to test and refine a model prototype before it is put into production. As a result, MLOps engineers need to put in a lot of effort to overcome these obstacles.
Building Well-Architected Machine Learning Solutions on AWS with Phil Basford
November 9, 2022 • 54 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Phil Basford, CTO of AI and ML at Inawisdom.com. Phil talks about what it takes to architect a good ML solution. He goes into more detail, giving use cases for AWS solutions and optimizing resources to make ML, MLOps, and AI in general more agile. They further discuss ML insights for small teams and teams working on budget ML solutions.
Having dealt with numerous clients and businesses, Phil Basford talks about some of the worst ML moments and the lessons that professionals can learn from them. With Phil’s wealth of experience, he talks about safety concerns for teams using AWS, compares different ML systems, and speaks on how culture can affect the architectural and design process of an ML solution.
Solving the Model Serving Component of the MLOps Stack with Chaoyu Yang
October 26, 2022 • 55 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Chaoyu Yang, Co-Founder & CEO of BentoML. They discuss model deployment setups and trade-offs for reasonable scale teams, the prevalence of open-source platforms, and how to deploy the best model serving practices for early-stage and established teams.
Since MLOps is still in its infancy, it isn't easy to find established best practices and model deployment examples for operationalizing machine learning solutions, as these can vary depending on factors such as the type of business use case, the size and structure of the organization, and the availability of resources. Whatever the deployment pipeline, Chaoyu provides insights into the model deployment struggles experienced by various ML engineers and their teams, and the solutions they implemented to address the model serving component.
How early-stage startups and small teams tackle MLOps with Duarte Carmo
October 12, 2022 • 54 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Duarte Carmo, Machine Learning Engineer at Amplemarket. Duarte draws on his extensive expertise in the field to offer advice on how to effectively execute MLOps within the constraints of limited manpower and resources of a team at a reasonable scale. He describes in-depth most of the critical problems that early-stage start-ups and small teams would encounter while deploying models and how to solve them.
Unlike the large corporations whose methods have popularized the field, most ML teams are small and face distinct challenges. Most companies in the machine learning sector now operate at a reasonable scale, and this Q&A session caters to them.
AutoML and MLOps with Adam Becker
September 28, 2022 • 54 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Adam Becker, Co-Founder at Telepath.io. Adam draws on real-world examples of where he has observed teams succeed and fail to tackle questions about AutoML and MLOps. Additionally, they discuss the path of machine learning technology development as well as the realignment of roles within the industry in the near future.
Machine learning has expanded from the realm of academic study into practical business solutions. Both the maturity of the process and its level of adoption now need to increase.
Adam sheds more light on the potential effects of the development of ML, the challenge of deploying ML use cases, and the concerns that data scientists may lose their jobs.
Embracing Responsible AI for ML Models in Production with Amber Roberts
September 14, 2022 • 51 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Amber Roberts, Machine Learning Engineer at Arize AI. They explore the best practices for implementing responsible AI for MLOps that prioritize accountability, fairness, and bias reduction. They further look at the role of observability and explainability in building responsible AI.
Machine learning (ML) applications are now widely used across businesses that want to integrate artificial intelligence (AI) capabilities, moving beyond the realms of academia and research. Interest in the principles, techniques, and best practices for using AI ethically and responsibly is growing along with the number of AI and ML solutions.
Amber addresses the diverse questions surrounding aspects of observability, interpretability, privacy, reliability, fairness, transparency, and accountability using her breadth and depth of domain knowledge. She further discusses the best practices, tools, and procedures needed for monitoring AI applications.
Building an MLOps Culture in Your Team with Adam Sroka
August 31, 2022 • 53 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Adam Sroka, Head of Machine Learning Engineering at Origami. They explore principles and frameworks for creating a team culture in MLOps that prioritizes the most important things and sets the team up for success.
Collaboration between teams is necessary to move through the ML life cycle as rapidly and effectively as possible. Adam gives us an idea of what the MLOps culture is at Origami, how he built it, the challenges he encountered, and actions teams can begin taking to build a good MLOps culture. He identifies areas that businesses may capitalize on, including infrastructure, team structure, tools, project ownership, and KPIs for efficient workflow.
Adam provides clear insights into the methods and tools he has used over his career to build great teams with a high-quality MLOps culture. He is excited to share his technical and non-technical expertise and shed some light on what works, especially since best practices and playbooks for building a good MLOps culture and maximizing the value of your projects and teams are not yet readily available.
Your First MLOps System: What Does Good Look Like? with Andy McMahon
August 17, 2022 • 58 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Andy McMahon, Machine Learning Engineering Lead of the NatWest Group. They explore concepts around building your first MLOps systems and how teams can understand the processes of optimizing level 0 operations and move towards scalability.
A properly mature MLOps pipeline can be so powerful that a piece of code may be put into production as soon as you commit it. However, attaining this level of maturity is extremely uncommon. Therefore, it becomes crucial to outline the requirements for creating a system that simplifies future operations, lowers failed deployments, and boosts performance.
The goal is to build an MLOps system that you can easily iterate on and that would not break when the time for scaling and integrating components (such as a model registry and feature stores) arrives. Andy demonstrates how a basic model with an optimal MLOps infrastructure will yield value more quickly than a complex model that is thrown over the fence, which may result in wasted resources. Teams can begin by redefining the deliverable expectations, simplifying them to what is truly necessary, utilizing available tools, and constantly realigning operational considerations to the business problem to be solved.
Andy outlines several essential ideas, from the most basic level (MLOps level 0), which involves no automation, to the most advanced ones (MLOps levels 1 and 2), which involve automating both machine learning and CI/CD pipelines.
Leveraging Unlabeled Image Data with Self-Supervised Learning or Pseudo Labeling with Mateusz Opala
August 3, 2022 • 46 MIN
In this episode of the MLOps Live podcast, Mateusz Opala, Senior Machine Learning Engineer at Brainly, answers questions about leveraging unlabeled image data with self-supervised learning or pseudo labeling.
Managing Computer Vision Projects with Michal Tadeusiak
July 20, 2022 • 53 MIN
In this episode of the MLOps Live podcast, Michal Tadeusiak, the Director of AI at deepsense.ai, answers questions about managing computer vision projects, specifically looking at AI activities spanning computer vision, NLP, and predictive modeling projects.
Data Engineering and MLOps for Neural Search with Fernando Rejon Barrera and Jakub Zavrel
July 6, 2022 • 51 MIN
Today, we’re joined by Fernando Rejon, Senior Infrastructure Engineer at Zeta Alpha Vector, and Jakub Zavrel, Founder and CEO of Zeta Alpha Vector. They discuss data engineering and MLOps for neural search applications, and how this innovation is pushing the bounds of search engines.
In this episode, they explore how they use modern deep learning techniques to build an AI research navigator at Zeta Alpha. They engage in an in-depth discussion of the challenges of setting up MLOps systems for neural search applications, how to evaluate the quality of embedding-based retrieval, and the trade-offs of using neural (information retrieval) search compared with standard information retrieval strategies, both in theory and in practice.
Additionally, they put into perspective the most important components needed to build a proof-of-concept neural search application, examine neural search models in both the retrieval and ranking phases from the perspective of scalability and predictability, and outline the conditions under which state-of-the-art results can be obtained. They also discuss the enormous work necessary to build and deploy neural search applications, which necessitates greater processing resources, such as GPUs rather than CPUs, to get desirable output.
Navigating ML Observability with Danny Leybzon
June 22, 2022 • 56 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Danny Leybzon, MLOps Architect at WhyLabs. They examine the differences between monitoring and observability in machine learning models for production and methods for efficient implementation and development.
Observability in MLOps is a holistic and comprehensive way to gain insights into the behavior, data, and performance of a machine learning model throughout its lifespan. It allows for detailed root cause analysis of ML model predictions and aids in the development of responsible models.
Although ML monitoring and observability appear to be similar, Danny points out that monitoring is a continuous system that prompts you when there is a problem, whereas observability refers to the larger picture: a human-in-the-loop root cause analysis system that allows you to figure out what the problem is and then solve it.
Danny further discusses the unique features of WhyLabs in comparison to other conventional monitoring solutions, such as customizable and opinionated self-serve capabilities that allow users to pick particular metrics to track, especially in the absence of ground truth.
Testing Recommender Systems with Federico Bianchi
June 8, 2022 • 56 MIN
Today, we’re joined by Federico Bianchi, a Postdoctoral Researcher at Università Bocconi. He discusses testing recommender systems, the essential features for any platform with that purpose, testing the relevance of these systems, and how to handle the biases they generate.
With the continuous growth of e-commerce and online media in recent years, there is an increasing number of software-as-a-service recommender systems (RSs) accessible today. Users can get new content from recommender systems, which range from news articles (Google News, Yahoo News) to series and films (Netflix, Disney+, Prime Video), and even products (Amazon, eBay). Today, there are so many products and so much information available on the internet that no single viewer can possibly see everything on offer. This is where recommendations come in, allowing products and information to be ranked according to their expected relevance to the user's preferences.
They compare offline evaluation to online evaluation platforms, which allow researchers to evaluate their systems in live, real-time scenarios with real people.
Federico discusses the benefits of offline modeling, highlighting the speed and convenience of testing algorithms with predetermined datasets. However, because these statistics are not tied to actual users, there are many biases to consider.
Deploying models on GPU with Kyle Morris
May 25, 2022 • 56 MIN
In this episode of MLOps Live, Sabine and Stephen are joined by Kyle Morris, Co-Founder of Banana ML. They discuss running ML in production leveraging GPUs, delving into GPU performance optimization, approaches, and infrastructural and memory implications, among other considerations.
With the increased interest in building production-ready, end-to-end ML pipelines, there’s a growing need for a toolset that can scale quickly. Modern commodity PCs have a multi-core CPU and at least one GPU, resulting in a low-cost, easily accessible heterogeneous environment for high-performance computing. But due to physical constraints, hardware development now yields greater parallelism rather than improved performance for sequential algorithms.
Machine learning build/train and production execution frequently employ disparate controls, management, runtime platforms, and sometimes languages. As a result, understanding the hardware you are running on is critical to taking advantage of any feasible optimization.
Building a Visual Search Engine with Kuba Cieslik
May 11, 2022 • 53 MIN
Today, we’re joined by Kuba Cieslik, CEO and AI Engineer at tuul.ai. He has experience in building ML products and solutions and has a deep understanding of how to build visual search solutions.
Visual search technology has been around for quite some time, as part of Google Images or Pinterest Lens. It has become increasingly popular in e-commerce, allowing customers to simply upload what they're looking for instead of going through a slew of attribute filters. Kuba discusses how one might go about creating such a visual search engine from the ground up, as well as what approaches work and the challenges in such a complex sector.
MLOps at a Reasonable Scale: Approaches & Challenges with Jacopo Tagliabue
May 11, 2022 • 56 MIN
Today, we’re joined by Jacopo Tagliabue, Director of A.I. at Coveo. He currently combines product thinking and research-like curiosity to build better data-driven systems at scale. They examine how immature data pipelines are preventing a substantial share of industry practitioners from benefiting from the latest ML research.
The majority of ideas for machine learning best practices and tools come from people at super-advanced, hyperscale companies: Big Tech firms like Google, Uber, and Airbnb, with sophisticated ML infrastructure to handle their petabytes of data. However, 98% of businesses aren't using machine learning in production at hyperscale but rather on a smaller (reasonable) scale.
Jacopo discusses how businesses can get started with machine learning at a modest scale. Most of these organizations are early adopters of machine learning, and with their good-sized proprietary datasets they can reap the benefits of ML without requiring all of the super-advanced, hyper-real-time infrastructure.