Automatic model optimization

With just a few lines of code, you can run hyperparameter optimization on a model. We handle all the plumbing behind the scenes to help you quickly find the optimal model architecture, features, and more.


Build, evaluate, profile, and compare models. Experiments structure your machine learning projects with automatic versioning, tagging, and life-cycle management. Run single-node training, distributed training, and hyperparameter sweeps. Choose from a range of hardware types, including GPUs, CPUs, and even TPUs.

01

Install the Paperspace CLI or use GradientCI to create a project.

02

Submit a hyperparameter experiment using the CLI or through a Git commit with GradientCI.

03

Find the best performing model in your console. You can then deploy the model, share it, or continue to experiment.

# Log in from the CLI
$ paperspace login

# Create a hyperparameter experiment
# (submit it with the CLI's hyperparameter experiment command, or via a Git commit with GradientCI)

Run on any DL or ML framework. Bring your own container or choose from a wide selection of pre-configured templates, complete with popular drivers and dependencies like CUDA and cuDNN.
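The example script below sketches what those few lines of code look like: it defines a search space and hands it, together with a training function, to the hyper_tune helper. The hp and tpe objects are assumed to come from the hyperopt library, which provides the distribution helpers and the Tree-structured Parzen Estimator (TPE) optimizer; check the SDK documentation for the exact import paths.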

gradient-hyperparam-example.py
from hyperopt import hp, tpe            # assumed source of the hp and tpe helpers
from paperspace_sdk import hyper_tune   # Paperspace hyperparameter tuning entry point
from .model import train_model          # your training function (see the sketch below)

# Define the hyperparameter search space
hparam_def = {
    'dense_len': 8 + hp.randint('dense_len', 120),
    'dropout': hp.uniform('dropout', 0., 0.8),
    'activation': hp.choice('activation', ['relu', 'tanh', 'sigmoid'])
}

# Evaluate up to 25 configurations, letting TPE suggest each one
hyper_tune(train_model, hparam_def, algo=tpe.suggest, max_evals=25)
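For context, here is a minimal sketch of what the imported train_model could look like, assuming hyper_tune follows the usual hyperopt convention of calling the function once per sampled hyperparameter set and minimizing the value it returns. The Keras/MNIST model is purely illustrative.

model.py
from tensorflow import keras

def train_model(hparams):
    # hparams is one sampled point from hparam_def,
    # e.g. {'dense_len': 72, 'dropout': 0.3, 'activation': 'relu'}
    (x_train, y_train), (x_val, y_val) = keras.datasets.mnist.load_data()
    x_train, x_val = x_train / 255.0, x_val / 255.0

    # Build a small classifier driven by the sampled hyperparameters
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(int(hparams['dense_len']), activation=hparams['activation']),
        keras.layers.Dropout(hparams['dropout']),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)

    # Return the quantity to minimize: validation loss
    val_loss, _ = model.evaluate(x_val, y_val, verbose=0)
    return val_loss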

Try Gradient

Ready to get started? Get in touch or create a free account.