About VESSL AI
VESSL - Modern workflow for Machine Learning
- VESSL AI’s mission is to accelerate machine learning transformation with a modern development workflow.
- The company provides an end-to-end MLOps platform that enables machine learning teams to build, train, and deploy models at scale without the complex and repetitive process of setting up pipelines and infrastructure. VESSL AI brings scalability, traceability, and reproducibility – principles that became standard in modern software development with the rise of DevOps – to machine learning projects.
- VESSL AI CEO Jaeman An and COO Dami Yi previously worked together at a medical AI startup as VP of Engineering and Chief of Staff. There, they witnessed firsthand the challenges of deploying machine learning models into production and noticed that this was a shared frustration across industries. They consulted Yongseon Lee and Jihwan Chun, former colleagues who were then working at Sendbird and Google. The four were convinced that the need for MLOps would only grow as more companies scale up their AI efforts, and founded the company in April 2020.
- VESSL AI launched the closed beta of its end-to-end MLOps platform in March 2021 and counts leading research institutions and computer vision companies in South Korea as its customers, notably the Graduate School of AI at KAIST (Korea Advanced Institute of Science and Technology). Machine learning teams across multiple industries, spanning autonomous vehicles, fundamental physics, biomedicine, manufacturing, and gaming, are on the waitlist.
- The company raised $4.4 million in seed funding. The round was led by KB Investment and Mirae Asset Venture Investment, with participation from A Ventures and Spring Camp. The raise comes just over a year after the company closed a $400K pre-seed round.
- Headquartered in Seoul, Korea, and San Mateo, CA, VESSL AI has a staff of 10+ who previously worked at Google, Sendbird, and PUBG.
What VESSL AI offers
- Experiment Tracking — a shared dashboard for tracking experiments and interactively visualizing model performance (see the sketch after this list)
- Advanced Training Modes — support for automated hyperparameter optimization and multi-node distributed training
- Dataset Version Control — a central repository for versioned datasets with data snapshots
- Custom Workspaces — customizable Jupyter Notebook environments with SSH access
- Resource Provisioning — resource allocation and automated provisioning on hybrid clusters
- Model Registry — a central repository for versioned models that can be reproduced or deployed with a single command
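For a concrete picture of what experiment tracking looks like in practice, below is a minimal sketch of a training loop that logs metrics to a shared dashboard. It assumes a wandb-style Python client with `vessl.init()` and `vessl.log()` calls; the names and signatures are illustrative assumptions, not a reference to VESSL AI's documented SDK.

```python
# Illustrative sketch only: the `vessl` client calls below are assumptions
# modeled on common experiment-tracking SDKs, not VESSL AI's documented API.
import vessl

vessl.init()  # assumed: register this run under the current project

for step in range(100):
    loss = 1.0 / (step + 1)      # stand-in for a real training step
    accuracy = 1.0 - loss
    # assumed: push scalar metrics to the shared experiment dashboard
    vessl.log(step=step, payload={"loss": loss, "accuracy": accuracy})
```

Logging through a single client like this is what allows a team to compare runs on one dashboard without building custom tracking plumbing.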
What sets VESSL AI apart
- In addition to a centralized dashboard and workflow management, VESSL AI offers (1) enterprise-grade infrastructure with automated resource provisioning and (2) powerful training modes with automated hyperparameter optimization and multi-node distributed training. These advanced features remove the overhead involved in developing state-of-the-art models at scale.
- VESSL AI also offers industry-leading cloud optimization and fully hosted on-premise deployments, which allow its enterprise customers to cut their cloud spending by up to 80%. For machine learning teams that rely on resource-heavy computation, cloud costs are becoming a burden, and many are moving from cloud-only setups to hybrid or on-premise servers. Beyond cost savings, teams using VESSL AI can streamline their workflow on a single, unified platform regardless of the development environment.
- Building an MLOps platform requires expertise in both machine learning and DevOps. VESSL AI's team consists of top DevOps and machine learning engineers who previously worked on Google Cloud Platform, Sendbird Cloud Infrastructure, and PUBG Data Platform.
What's next for VESSL AI