Solving the Model Serving Component of the MLOps Stack with Chaoyu Yang
October 26, 2022
In this episode of MLOps Live, Sabine and Stephen are joined by Chaoyu Yang, Co-Founder & CEO of BentoML. They discuss model deployment setups and trade-offs for reasonable-scale teams, the prevalence of open-source platforms, and how to adopt model serving best practices in both early-stage and established teams.
Since MLOps is still in its infancy, established best practices and model deployment examples for operationalizing machine learning solutions are hard to find, as deployments can vary with factors such as the business use case, the size and structure of the organization, and the availability of resources. Whatever the deployment pipeline, Chaoyu shares the model deployment struggles that various ML engineers and their teams have faced, along with the solutions they implemented to solve the model serving component.
Subscribe to our YouTube channel to watch this episode!
Learn more about Chaoyu Yang:
Previous guests include: Andy McMahon of NatWest Group, Jacopo Tagliabue of Coveo, Adam Sroka of Origami, Amber Roberts of Arize AI, Michal Tadeusiak of deepsense.ai, Danny Leybzon of WhyLabs, Kyle Morris of Banana ML, Federico Bianchi of Università Bocconi, Mateusz Opala of Brainly, Kuba Cieslik of tuul.ai, Adam Becker of Telepath.io and Fernando Rejon & Jakub Zavrel of Zeta Alpha Vector.
MLOps Live is handcrafted by our friends over at: fame.so