H200 GPU instances


Accelerate your AI projects with H200 GPU instances

High-performance for training, inference, and the most demanding data workloads in a reliable and transparent European cloud.

Why choose NVIDIA H200 GPUs?

Powerful

Up to 1.4× faster than the H100 for training and inference of generative AI models.

High performance

141 GB of ultra-fast HBM3e memory with 2× the memory bandwidth, ideal for large models.

Compatible

H100 compatible: leverage your existing frameworks and optimisations without complex migration.

Sovereign

Available in our Public Cloud, ensuring flexibility, transparency, and European compliance.

Optimised for your AI and data workloads

Large-scale LLM

Train and deploy models with up to 175B parameters (GPT-3, Llama 3, Falcon 180B), thanks to 141 GB of HBM3e memory and 4.8 TB/s of bandwidth.
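As a rough illustration of why 141 GB per GPU matters for large models, the sketch below estimates how many GPUs are needed just to hold a model's weights in FP16. The `overhead` multiplier for KV cache and activations is an assumption and varies heavily by workload; this is a sizing rule of thumb, not an official calculator.

```python
import math

def min_gpus_for_weights(n_params: float, bytes_per_param: int = 2,
                         gpu_mem_gb: float = 141, overhead: float = 1.2) -> int:
    """Rough number of GPUs needed to hold model weights in memory.

    bytes_per_param=2 assumes FP16/BF16 weights.
    overhead is an assumed multiplier for KV cache and activations.
    """
    weights_gb = n_params * bytes_per_param / 1e9
    return math.ceil(weights_gb * overhead / gpu_mem_gb)

# A 70B model in FP16 is ~140 GB of weights alone: 2 GPUs with headroom.
print(min_gpus_for_weights(70e9))   # 2
# A GPT-3-scale 175B model in FP16 is ~350 GB of weights: 3 GPUs.
print(min_gpus_for_weights(175e9))  # 3
```

Quantisation (INT8, FP8) lowers `bytes_per_param` and can roughly halve these figures.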

Advanced generative AI

Generate text, images, audio, and video with stable response times, even in long contexts.

Extended context and RAG

Enhance your AI assistants and chatbots with long context windows.

Specifications

Technical specifications

GPU

1 or 8 GPUs per instance

GPU memory

141 GB of ultra-fast HBM3e per GPU

High-performance storage

Local NVMe passthrough on most instances

Public and private network

Up to 25 Gbps included

Automation

Management via your customer space, API, OVHcloud CLI…

Secure and private

ISO 27001 and SOC certifications, health data hosting…

Maximise your ROI with flexible GPU infrastructure

Transparent pricing

Pay only for the resources you use, with no hidden fees. You maintain control of your costs while enjoying optimal performance.

Instant scalability

Scale up or down your GPU resources on demand, in just a few clicks. Easily adapt your capacity to your AI and data workloads.

Sovereignty and compliance

Your data is hosted on a certified European cloud, ensuring security, transparency, and compliance with regulations (GDPR, ISO, HDS).

Barrier-free accessibility

H200 GPUs accessible to everyone: from proof of concept to production deployment, with no volume commitment or hardware constraints.

How to choose your GPU for inference?

Compact models

For models with up to 7B parameters, an A100 offers an excellent price-performance ratio.

Large models

For models with 65B+ parameters or extended context windows, the H200 provides the memory bandwidth needed for stable response times.
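Memory bandwidth matters because autoregressive decoding is typically bandwidth-bound: each generated token reads every weight once, so single-stream throughput is capped at roughly bandwidth divided by model size. The sketch below applies that simplified rule of thumb (it ignores batching, KV-cache reads, and kernel overheads) using the H200's 4.8 TB/s figure.

```python
def decode_tokens_per_sec_ceiling(n_params: float, bandwidth_tb_s: float,
                                  bytes_per_param: int = 2) -> float:
    """Upper bound on single-stream decode speed for a memory-bound model.

    Each token requires streaming all weights from memory once, so
    tokens/s <= bandwidth / model size in bytes. FP16 assumed by default.
    """
    model_bytes = n_params * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# 70B model in FP16 on one H200 (4.8 TB/s HBM3e):
print(round(decode_tokens_per_sec_ceiling(70e9, 4.8)))  # ~34 tokens/s ceiling
```

Real throughput lands below this ceiling, but the ratio explains why doubling memory bandwidth directly improves long-context inference latency.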

Your questions answered

What service level agreement (SLA) is guaranteed by OVHcloud on a GPU instance?

The service level agreement (SLA) is 99.99% monthly availability on GPU instances. For further information, please refer to the General Terms of Service.

Which hypervisor is used for instance virtualisation?

Just like other instances, GPU instances are virtualised by the KVM hypervisor in the Linux kernel.

What is PCI Passthrough?

GPU cards are attached via the physical server’s PCI bus. PCI Passthrough is a hypervisor feature that dedicates hardware to a virtual machine by giving it direct access to the PCI bus, bypassing the virtualisation layer.

Can I resize a Cloud GPU instance?

Yes, Cloud GPU instances can be upgraded to a higher model after a reboot. However, they cannot be downgraded to a lower model.

Do GPU instances have anti-DDoS protection?

Yes, our anti-DDoS protection is included with all OVHcloud solutions at no extra cost.

Can I switch to hourly billing from an instance that is billed monthly?

If you have monthly billing set up, you cannot switch to hourly billing. Before you launch an instance, please take care to select the billing method that is best suited to your project.

What is a Cloud GPU?

A Cloud GPU is a cloud computing service that provides graphic processing units (GPUs) for tasks that require high computing power. Examples of these tasks are graphic rendering, machine learning, data analysis, and scientific simulations. Unlike on-premises GPUs, which require a significant investment in hardware, cloud GPUs are more flexible and easier to scale. Users can access high-performance computing resources on demand, and only pay for what they use.

What are H100 and A100 servers?

Servers that are equipped with NVIDIA H100 and A100 GPUs are purpose-built to offer exceptional performance in HPC, AI, and data analytics.

What is NGC?

NVIDIA GPU Cloud (NGC) is a cloud computing platform offered by NVIDIA. It provides a comprehensive selection of software that is optimised for GPU acceleration in artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC). NGC simplifies and speeds up the deployment of AI and scientific computing applications. It does this by providing containers, pre-trained models, SDKs, and other tools that are optimised to leverage NVIDIA GPUs.

Why use a Cloud GPU?

There are several advantages to using a Cloud GPU, especially for companies, research teams, and development teams in demanding areas such as artificial intelligence (AI), graphics rendering, machine learning (ML), and high-performance computing (HPC).