Bare-Metal Servers for High-Performance Workloads
These bare-metal servers are specifically configured to deliver the best possible performance for your workloads. Used alone or as part of a cluster, these high-density, low-latency hardware configurations are suited to machine learning, grid computing, in-memory databases and artificial intelligence applications. Explore the different use cases and customise your servers to suit your goals and requirements.
High-availability servers for critical-usage clusters
Redundant architecture for electrical circuits, water-cooling, network and power.
Designed to support resource-intensive production environments.
Connect your clusters to OVHcloud Public Cloud or OVHcloud Hosted Private Cloud.
Choose and customise your servers to build your clusters.
Choose your servers
Servers optimised for extremely fast disk access and minimal latency
Some specific applications require high-performance storage in terms of IOPS (I/O operations per second). The HG IOPS Intensive server is designed for mass analysis tasks, digital simulation projects, and very high definition video applications. It is also an ideal foundation on which to build, power and maintain a high-performance e-commerce site. Thanks to their low latency and extreme speed, the NVMe SSDs this model is equipped with will be more than able to meet your requirements, being on average six times more efficient than SATA SSDs.
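To make the IOPS metric concrete, here is a minimal Python sketch (not an OVHcloud tool; file size, block size and read count are arbitrary assumptions) that times random 4 KiB reads against a temporary file and reports the resulting I/O rate:

```python
import os
import random
import tempfile
import time

BLOCK_SIZE = 4096              # 4 KiB, a typical unit for IOPS figures
FILE_SIZE = 8 * 1024 * 1024    # 8 MiB test file (arbitrary)
NUM_READS = 2000

# Create a temporary file filled with random data to read from.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

try:
    blocks = FILE_SIZE // BLOCK_SIZE
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(NUM_READS):
            # Seek to a random block and read it, as a random-read
            # workload (e.g. a database) would.
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
    elapsed = time.perf_counter() - start
    iops = NUM_READS / elapsed
    print(f"~{iops:,.0f} random 4 KiB reads per second")
finally:
    os.remove(path)
```

Note that this simple sketch is served largely from the operating system's page cache; dedicated tools such as fio, using direct I/O, are needed to measure the true figures of the underlying NVMe device.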
High-density servers, designed for massive data processing
Used alone or as part of a cluster, the high-density HG server, configured for big data and analytics, will allow you to effectively manage your dynamic workloads. It is suitable for both intensive computing (HPC) applications and data analysis solutions. This model, specifically designed for big data, is compatible with the most common data processing platforms, such as Hadoop, SQL and NoSQL databases. It also facilitates the management of databases, such as Apache Cassandra, Microsoft SQL Server, and MongoDB.
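The map/shuffle/reduce pattern that platforms such as Hadoop run at cluster scale can be illustrated with a single-process Python sketch; the documents and word counts below are invented examples:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework would before
    # routing each key to a reducer node.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data needs big clusters", "clusters process big data"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle_phase(mapped))
print(counts["big"])   # 3
```

In a real cluster, the map and reduce phases run in parallel across many servers, and the shuffle moves intermediate data between them; the logic per node, however, is no more complicated than this.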
Servers for AI and machine learning
Designed for processing the parallel calculations required for machine learning, the HG AI and machine learning configurations are equipped with the latest generation of high-performance components. These configurations make it possible to unleash the full potential of the Nvidia Tesla P100 GPUs – the most widely used accelerator in the world for intensive parallel computing. Take advantage of all the deep learning frameworks accelerated by the Tesla P100, and more than 450 HPC applications.
Servers optimised to run in-memory databases
Data is usually stored on a hard disk or SSD and then transferred to RAM when it is used. These numerous disk accesses can slow down servers, especially if the RAM is not large enough. The HG In-Memory Database configuration is the ideal solution if you are looking for a server that is optimised to support an In-Memory Database Management System (In-Memory DBMS). Such a system improves the performance of queries and applications by keeping the data they access directly in memory, rather than on disk.
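As a small illustration of the in-memory idea, Python's standard sqlite3 module can host a database entirely in RAM via the special `:memory:` path, so queries never touch the disk; the table and figures below are invented:

```python
import sqlite3

# ":memory:" tells SQLite to keep the entire database in RAM,
# so every query is served without any disk access.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    [(19.99,), (42.50,), (7.25,)],
)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(f"total: {total:.2f}")
conn.close()
```

Production in-memory DBMSs such as those the HG In-Memory Database configuration targets work at a very different scale, but the trade-off is the same: RAM capacity bounds the dataset, which is why these configurations emphasise large memory.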
Since 2003, Irontec has delivered the assurance that your infrastructures and applications are in good hands. It is the most award-winning OVHcloud partner of recent years, with an OpenAwards prize for the best European provider of open technology solutions.
Grupo Trevenque helps companies to use technology to improve their processes and business models. With more than 25 years of experience, we bring software and cloud solutions to our customers.
In a constantly changing and increasingly connected world, Thales stands by those with great ambitions: to put digital technology at the service of a better and more secure world. To ensure that we can benefit from new technologies with confidence, Thales supports and secures the transformation of information systems and the most critical solutions and protects the entire data lifecycle, from its creation to its exploitation.
Designed for high availability and fault tolerance, these servers rely on OVHcloud's vRack private network to ensure smooth exchanges between machines (up to 10Gbit/s) within a secure VLAN. They are therefore ideal for building a data lake, while interconnecting it with OVHcloud Public Cloud solutions.
For AI and machine learning, use frameworks such as Caffe2, Cognitive Toolkit, PyTorch, TensorFlow and many others. These are based on libraries accelerated by the Nvidia Tesla P100, such as cuDNN and NCCL, which specialise in delivering optimal learning performance.
Fast disk access and minimal latency
With read speeds of up to 587,000 IOPS and write speeds of up to 184,000 IOPS, you can accelerate NoSQL databases, search engines, data warehouses, real-time analysis, and disk-based caching applications.
All of the different hard disks and SSDs (SAS, SATA, NVMe) can be replaced without any need to reboot your server. As a result, if you need to increase your server's disk capacity you will not experience any service interruptions.
Different levels of support for your organisation
What is cluster computing?
Cluster computing is simply the practice of linking multiple computers or servers together in order to maximise performance and availability. The key difference between cloud computing and cluster computing is that cluster computing is based on linking together physical solutions, rather than virtual ones. However, OVHcloud allows you to link your server clusters at any of our datacentres to any of your cloud solutions, utilising the vRack's secure private connections.
There are numerous potential uses for such architectures, including balancing workloads between servers to maximise consistency, ensuring redundancy in case of hardware failure, or delivering the very highest level of performance for especially intensive applications.
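The workload-balancing and redundancy ideas above can be sketched in a few lines of Python; the node names and the health-check mechanism here are purely illustrative:

```python
from itertools import cycle

class ClusterBalancer:
    """Round-robin dispatcher that skips nodes marked unhealthy."""

    def __init__(self, nodes):
        self.health = {node: True for node in nodes}
        self._ring = cycle(nodes)

    def mark_down(self, node):
        # Simulates a failed health check on one server.
        self.health[node] = False

    def next_node(self):
        # Walk the ring until a healthy node is found; give up
        # after one full lap if every node is down.
        for _ in range(len(self.health)):
            node = next(self._ring)
            if self.health[node]:
                return node
        raise RuntimeError("no healthy nodes left in the cluster")

balancer = ClusterBalancer(["node-a", "node-b", "node-c"])
balancer.mark_down("node-b")          # hardware failure on node-b
assigned = [balancer.next_node() for _ in range(4)]
print(assigned)   # node-b never appears
```

Real load balancers add active health probes, connection draining and weighting, but the core behaviour – spreading requests evenly and routing around a failed machine – is exactly what this sketch shows.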
Regardless of the specific use case, hardware is a key consideration for any sort of high-performance cluster computing solution. As these solutions are typically utilised for the most demanding workloads, all components (including processors and GPUs) must be of the highest standard, and all connections between servers must be both fast and secure. Customisability is key in this regard, as any servers utilised as part of high-performance clusters must offer the freedom to tailor the hardware for specific use cases, and be fully compatible with all the most widely-used data processing platforms.
Why does cluster computing matter for your organisation?
With a growing number of organisations demanding the highest level of performance for their most intensive workloads, and often exploring the possibilities offered by big data, AI and machine learning, the raw power and unparalleled flexibility offered by cluster computing have become increasingly valuable for a wide range of use cases.
However, setting up, managing and maintaining a cluster computing architecture in-house can be extremely expensive and time-consuming, regardless of an organisation's current level of internal expertise. This is especially challenging when fast deployment is essential to accommodate a specific workload or an anticipated traffic peak.
A preconfigured cluster at any of OVHcloud's world-class datacentres eliminates these concerns, while still providing you with the flexibility, security and control that an on-premises solution would offer. You have the freedom to configure your high-performance servers to suit your workloads, after which your cluster will be deployed and configured as quickly as possible by our in-house experts.
This way, your teams can focus on your projects and ongoing growth, with complete autonomy to manage your clusters via the OVHcloud Control Panel, while you enjoy the peace of mind that comes from knowing your hardware is being managed and maintained by our datacentre experts.