Overview

At its core, Sermos is a framework built on a heavily tested tech stack, purpose-built for running complex, scalable, highly available Machine Learning (ML) workloads.

Sermos comes with a large set of pre-made design decisions, tooling, and services for quickly running ML and document-based workloads. Key features include:

  • Workers & Scheduling
    • Robust, scalable worker system on top of Celery and RabbitMQ.

    • Flexible scheduling system to invoke tasks on any schedule.

  • Pipelines
    • Dynamic “pipelines” defined in a simple YAML configuration file.

    • Supports both simple linear workflows and complex directed acyclic graphs (DAGs) with no additional effort.

  • Tools (Sermos Tools)
    • Sermos Tools takes care of mundane, common tasks found in many ML, Natural Language Processing (NLP), Internet of Things (IoT), and other workloads.

  • API Endpoints and Documentation

    • Proper API design (data marshalling, throttling, security, etc.) comes out of the box, with very few requirements placed on customer code.

    • Automatic API Documentation comes out of the box.

    • Common API endpoints including Pipeline invocation, training data collection and retrieval, and metrics.

  • Databases (Sermos Cloud)

    • Highly available implementations of major databases including Postgres, Elasticsearch, and Redis.

  • Admin Console (Sermos Cloud)

    • Create API consumers, issue API keys, create task schedules, and configure pipelines all from your Sermos Cloud Console.
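As an illustration of how a YAML file can describe both linear workflows and DAGs, the hypothetical config below declares each task with its upstream dependencies. The field names here are invented for illustration and are not Sermos's actual pipeline schema.

```yaml
# Hypothetical pipeline config -- field names are illustrative,
# not Sermos's actual schema.
pipeline:
  name: document-classifier
  tasks:
    extract_text: {}                      # no dependencies; runs first
    clean_text:
      depends_on: [extract_text]          # linear step
    extract_metadata:
      depends_on: [extract_text]          # fan-out: runs in parallel
    classify:
      depends_on: [clean_text, extract_metadata]  # fan-in joins the DAG
```

Listing dependencies per task is enough to express a full DAG: a straight chain is just the special case where each task depends on exactly one predecessor.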