As COVID-19 cases continue to mount, Maxihost has put a comprehensive plan in place to ensure customers aren't affected. We are working diligently and responding to a rapidly evolving situation to make sure we continue to provide the highest level of uptime and resiliency, keeping our commitment to customers.
Today, we would like to give more visibility into our strategy, detailing our plans and how we execute on them while maintaining the health and safety of our employees.
Infrastructure resilience and reliability
Our service is designed with a high degree of redundancy and fail-over capabilities to reduce the likelihood of impact.
Maxihost operates thousands of servers globally, delivering 99.9%+ uptime across different data centers. We have employees or partners close to every data center location we offer, which gives us flexibility if supply chains are disrupted in an extended crisis.
Work from home
Maxihost employees and contractors are able to (and frequently do) work from anywhere. We have the resources and tools to do our jobs securely from any location. Out of an abundance of caution during COVID-19, Maxihost employees, regardless of office location, are being encouraged to work remotely.
All business travel and visits have been suspended, and onsite interviews with candidates are being conducted over video conference.
We understand these are very challenging times, and we are doing everything we can so customers are not impacted. Don't hesitate to reach out to us if you have questions about our Business Continuity plans.
Edit March 17th: We haven't ruled out the possibility of increased lead time on restock events, but our supply chain is still performing as expected and vendors are optimistic.
Edit March 19th: Although most employees have already been working from home for a week, as a precautionary and proactive measure to reduce exposure to COVID-19 and to keep our employees and communities safe, we've made our work-from-home policy mandatory.
Edit May 28th: As businesses start opening up again across the country, employees will gradually be invited back to the office starting June 1st. Starting today, all employees, in all positions, will have the option of working from home for as long as they want.
Edit June 11th: All employees are now free to work from the office at their discretion. We won't be making mandatory requests and those who want to continue working from home are free to do so. We don't expect to change our policies in the near future, so this will be our last update to this post.
Most enterprises run some sort of multicloud environment, simply because different clouds are better at certain things than others.
Running RDS on AWS, an Active Directory cluster on Azure, and a handful of bare metal servers on Maxihost is enough to make even seasoned System Administrators sweat.
Amplified by an ever-growing and often unnecessary complexity that shows up during application development and deployment, cloud environments have become so different from company to company that managing them is a challenge particular to each organization.
In order to simplify the management of these workloads, we're excited to announce our integration with Mist.io.
Mist is a multicloud management platform that lets you manage your clouds, servers, and VMs through a common abstraction, while also integrating with other services like Kubernetes.
By using Maxihost with Mist you can deploy and manage your bare metal servers, and do the same for all the other clouds that your company or team uses.
We're excited to be partnering up with the talented team at Mist so that our customers will finally be able to bring their virtualized clouds and bare metal servers together on a single control panel.
Getting started is as easy as adding your API token to Mist's dashboard; here's how.
PS: Maxihost users who sign up for Mist in February will get three months of free usage.
Measuring and understanding your traffic is an important part of sizing your infrastructure and keeping costs under control. Today, we're excited to launch new bandwidth graphs. They are much faster and easier to read, and were built from the ground up to help you make more sense of your traffic.
These are the most important metrics for understanding what happened during the selected period. Max is the maximum speed the server reached during the period, and Current is the speed at the present time.
Preselected period and custom ranges
It can be useful to look at different periods to see how your traffic behaves. You now have a wider range of predefined date periods to quickly navigate your graphs, and you can also select custom periods.
The new graphs are available for all accounts—just go to your server's bandwidth page. If you have feedback or feature requests, click on the Feedback option under the Support icon. We're looking forward to your ideas on what we should tackle next.
Prior to today, when you needed to access your server remotely through IPMI, you had to contact our team, who would then create a VPN session and share temporary IPMI credentials. Since that is not ideal when you need to change things quickly on your infrastructure, we have been working hard to make this process faster and safer.
Today, we are proud to announce the next step in enhancing the management of your servers from the dashboard and API: Maxihost Remote Access.
When you go to the dashboard and select a server, you will now find the Remote Access option.
Clicking it takes you to a page where you can set up a VPN session and obtain the credentials to access the IPMI instantly.
How we're making the process safer
IPMI is notoriously insecure, but it is still the best way to manage a server when you lose SSH access or need to make changes to server operation.
In order to protect you and your servers, we have built an incredibly safe way to access your servers remotely. Here are some of the measures we took to protect you:
IPMI IP addresses are always private. We require that you create a VPN session before getting the IPMI credentials.
When you create a VPN session, we make sure that session only has access to the IPs that exist in your account, meaning you cannot access anything from a private network that is not assigned to you.
VPN sessions and passwords expire after 24 hours. We do not store passwords anywhere.
Every time you request your IPMI credentials, we change the IPMI password. Again, we do not store IPMI passwords anywhere.
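To make the 24-hour lifetime concrete, here is a minimal sketch of an expiry check like the one described above. This is purely illustrative (the real expiry is enforced on our side, and the function name is ours, not part of any API):

```python
from datetime import datetime, timedelta, timezone

# 24-hour lifetime for VPN sessions and IPMI passwords, as described above.
SESSION_TTL = timedelta(hours=24)

def session_expired(created_at, now=None):
    """Return True once a session created at `created_at` has passed its TTL.

    Illustrative helper only; Maxihost enforces the actual expiry server-side.
    """
    now = now or datetime.now(timezone.utc)
    return now - created_at >= SESSION_TTL

created = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
print(session_expired(created, now=created + timedelta(hours=25)))  # True
print(session_expired(created, now=created + timedelta(hours=23)))  # False
```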
When providing remote access for Bare Metal, providers usually have to compromise on something, either removing IPMI access and only providing a console, or simply disabling everything. We are glad we have found a solution that keeps your servers safe while giving you full control of the underlying infrastructure.
If you are interested in learning more, see the documentation. You will find more information about how to use Remote Access.
Also, don't hesitate to share your feedback with the team who built this. Send an email to the Engineering team.
Security and compliance are among the largest barriers to cloud adoption, according to this recent report from Accenture. An important part of dealing with security is making sure that the right people have access to the right resources.
Today we're pleased to announce Role-Based Access Control for the Maxihost dashboard.
With RBAC you can easily add and remove users, assign one of three predefined roles, and limit access to your account based on what people are allowed to do within it.
You can select from three predefined roles:
Full Permission: Users with the Full Permission role have view and edit rights to all account information and settings. The Full Permission role can add users, create and manage all server settings, and even delete the entire account.
Collaborator: Collaborators have view and edit rights to all the information on Servers and Networking. Collaborators can create new Servers, add their SSH Key, request Additional IPs, and perform all other server management actions, including deleting servers. They cannot view account-related details such as billing information, nor add team members.
Billing: Users with the Billing role are only able to view and edit billing information and view and pay invoices. They can't see or modify servers or services on your account.
API Access: You can set additional permission for users for API access. This is useful if you want to build an integration with a secondary user with limited Dashboard access.
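The roles above boil down to a mapping from role to the areas of the account it may touch. The sketch below models that mapping; the role and area identifiers are our own illustrative names, not the dashboard's or API's actual values:

```python
# Illustrative mapping of the dashboard roles described above to the areas
# they can access. These identifiers are assumptions for this sketch only.
ROLE_PERMISSIONS = {
    "full_permission": {"servers", "networking", "billing", "team"},
    "collaborator": {"servers", "networking"},
    "billing": {"billing"},
}

def can_access(role, area):
    """Return True if `role` grants access to `area` (unknown roles get nothing)."""
    return area in ROLE_PERMISSIONS.get(role, set())

print(can_access("collaborator", "servers"))  # True
print(can_access("collaborator", "billing"))  # False
```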
Learn more about adding users and using RBAC here.
Maxihost started as a simple web hosting provider in late 2001. We then went on to build our own data center in 2015, which enabled us to work with some of the world’s most interesting and innovative companies as they distributed their workloads in Latin America.
Today we're taking the next step in our journey.
We’re launching a big update to our product offering, along with a new logo and website. This is more than a coat of paint on our brand—it’s the result of long and profound collaboration between our in-house marketing team and James Mikrut and the great team from Keen Studio, who worked to create a new, more cohesive visual identity for Maxihost.
The company is growing and changing at a rapid pace. We’re no longer a niche service provider. We’re building IT infrastructure products that make the internet faster for companies at the forefront of their industries.
This has resulted in changes to our products and the need for the Maxihost team to turn its complete focus to the platform that is enabling this change.
Incremental adjustments to the old website weren’t an option, as much of it didn’t reflect the Maxihost we are now. From our messaging to product features, we knew we needed to start from scratch.
It’s still us, but more consistent and, we hope, more instantly recognizable.
We hope you enjoy it.
It’s good to know that your Bare Metal is being deployed instantly after you request it, but it’s much better when you know what’s happening during that process. Starting today, you can!
There are a few different stages we’ll show you during the deployment process.
Awaiting will tell you which servers have open invoices that need to be paid before being provisioned. For users in the hourly billing beta, we skip this stage as payment is due only at the end of the billing cycle.
Deploying means our systems are hard at work running additional hardware tests, installing the Operating System you selected and making sure everything is good to go.
When the device is ready to be used, we’ll show you a New stage. That’ll persist for 72 hours to make it easier for you and your team to know which servers have been deployed recently.
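The stage progression above can be consumed programmatically, for example by polling until a device reaches the ready stage. The sketch below uses the stage names from this post (the API's actual values may differ) and takes the fetch function as a parameter so it stays self-contained:

```python
import time

def wait_until_deployed(fetch_stage, poll_seconds=30, sleep=time.sleep):
    """Poll a caller-supplied `fetch_stage` callable until the device is 'new'.

    `fetch_stage` would typically wrap a Maxihost API call; injecting it (and
    the sleep function) keeps this sketch testable. Stage names follow the
    post: 'awaiting' -> 'deploying' -> 'new'.
    """
    seen = []
    while True:
        stage = fetch_stage()
        seen.append(stage)
        if stage == "new":
            return seen
        sleep(poll_seconds)

# Simulated deployment progressing through the three stages.
stages = iter(["awaiting", "deploying", "new"])
history = wait_until_deployed(lambda: next(stages), sleep=lambda s: None)
print(history)  # ['awaiting', 'deploying', 'new']
```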
Give it a try
We hope this update makes it easier to keep track of your deployments. Let us know what you think!
Thanks again for being a Maxihost customer.
We’re extremely excited to announce that the API that powers Control, Maxihost’s web interface, is now finally public! Head to https://developers.maxihost.com for documentation and a guide to getting started.
This has been a huge project that took us 6 months to complete. We wanted to build an easy-to-use, extremely resilient API that behaves predictably when you need it.
A great example of that is a queue system for server provisioning. When requesting deployment of a bare metal that’s not in stock, you can set the backorder parameter to true so the request is added to a queue. When a server with the requested specs comes back in stock, we automatically provision it.
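As an illustration, a request body for such a deployment might be assembled like this. The field names below are assumptions for the sketch only; check https://developers.maxihost.com for the actual schema:

```python
def build_deploy_request(plan, hostname, operating_system, backorder=False):
    """Build a JSON body for a device-provisioning request.

    Field names are illustrative, not the documented schema. The `backorder`
    flag mirrors the behavior described above: when True, an out-of-stock
    request is queued and provisioned automatically once matching hardware
    returns to stock.
    """
    return {
        "plan": plan,
        "hostname": hostname,
        "operating_system": operating_system,
        "backorder": backorder,
    }

body = build_deploy_request("quad-xeon", "db-01", "ubuntu_18_04", backorder=True)
print(body["backorder"])  # True
```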
Some technical details
Pagination is implemented following RFC 5988's convention of `Link` headers to provide URLs for the `next` page. The API also provides links and meta attributes in responses if you'd rather not use headers.
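As a sketch of how a client might consume those headers, the snippet below parses a `Link` header into its `rel` targets. It handles the common `<url>; rel="name"` form used for pagination, not every corner of the RFC, and the example URLs are made up:

```python
import re

def parse_link_header(header):
    """Parse an RFC 5988 `Link` header into a {rel: url} dict.

    Minimal sketch: matches the `<url>; rel="name"` form used for
    next/prev/last pagination links.
    """
    links = {}
    for part in header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="([^"]+)"', part)
        if match:
            links[match.group(2)] = match.group(1)
    return links

header = ('<https://api.example.com/devices?page=2>; rel="next", '
          '<https://api.example.com/devices?page=9>; rel="last"')
print(parse_link_header(header)["next"])  # https://api.example.com/devices?page=2
```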
Traffic is served on a separate domain to help shield from CSRF and session vulnerabilities, and to aid with monitoring, routing, and throttling.
Integration help for partners
For existing and new companies wanting to partner with Maxihost and integrate their software with ours, please reach out to us and we’ll be happy to discuss collaborating on an integration.
If you find any issues in the documentation or want a specific example added to it, please ping us on [email protected].
We’re excited to see what our customers will build and how they’ll use the API to better manage their infrastructure.
Containers consist of a runtime (a layer between the hardware and the software) that packages an application and all its dependencies, such as libraries, configuration files, and other binaries, into a single unit known as an image.
When the image of an application is created, differences between O.S. (Operating System) distributions and other layers of the infrastructure are abstracted away, solving one of the biggest problems of running software: making an application work reliably in different environments.
What problems can Containers solve?
Delivery time: Containers can be created and deleted within seconds, meaning that they can be instantiated “just-in-time” since it is not necessary to initialize a whole O.S. for each new container.
Portability: Containers isolate services of an application. With that, it’s possible to move your app freely between environments, even when the server has a different Operating System.
Configuration: Changes can be made to each container individually and automatically, without rebuilding the entire application. Because containers are lightweight, they can be instantiated when needed and are available almost immediately.
Differences between Containers and Virtual Machines
There are many differences between containers and VMs; here are the most important:
Containers and virtual machines differ architecturally in terms of the Operating System: containers are hosted on a server with a single O.S. (the host O.S.) shared among them.
Virtual machines, on the other hand, have the host O.S. of the physical server they run on, plus a guest O.S. on each VM. The guest O.S. is independent of the host O.S. and may differ from one VM to another.
In practical terms, containers are most commonly used when you want to run applications in the same kernel. However, if you have applications or services that need to run on different Operating System distributions, VMs are usually required.
Sharing the host’s O.S. among containers makes them very lightweight, which reduces boot time. Because of this, the overhead (the amount of physical resources required on the server) to manage a container system is much smaller compared to VMs.
Because the host kernel is shared between containers, container technology has access to the kernel’s subsystems. As a result, a vulnerability in the application can compromise the entire host server. Because of this, giving root access to applications is not recommended.
On the other hand, VMs are unique instances with their own kernel and security settings. They can, therefore, run applications that need higher permissions.
Each image in a container is a standalone package that runs an application or part of it. As a separate guest O.S. is not required, this image can be moved between different platforms.
Containers can be started or stopped in a matter of seconds when compared to VMs due to their lightweight architecture. This makes it easier and faster to deploy containers to servers.
VMs, on the other hand, are isolated instances running their own Operating System. They cannot be moved between platforms without a careful migration process.
When developing an application or service that must be built and tested in different environments, containers are the better option.
Containers are significantly lighter than VMs, thus requiring fewer resources.
As a result, containers boot much faster, since virtual machines need to load an entire Operating System to initialize.
Another major difference is resource usage: a container’s consumption of CPU, memory, I/O, and so on varies with the load or traffic it handles. Unlike with VMs, it is not necessary to permanently allocate resources to a container.
Because of this, container technology is far more scalable.
Containers are considered the evolution of VMs and are being adopted by companies of all sizes.
Their flexibility and lower resource requirements make them a compelling choice for deploying and managing your applications.
Despite being less mature than conventional virtualization, container technology has developed rapidly and is already a standard choice for the workloads of large companies such as Google and Walmart.
Bare Metal Cloud is a term that we’ll be hearing more and more in the coming years. As an alternative to conventional clouds, Bare Metal Cloud platforms have been growing and presenting themselves as a great alternative to virtualized environments by solving many of the problems that virtual machines deal with.
What is Bare Metal Cloud?
Bare Metal Cloud is a public cloud for physical servers, where machines can be provisioned and managed with simplicity and speed similar to virtual machines. It combines the performance benefits of physical servers with the flexibility and scalability of VMs.
Those who opt for Bare Metal Cloud get access to all of the power of physical servers and the flexibility of virtualized servers, such as elastic storage capacity, on-demand network configurations, and other services they need. It is an option that offers high flexibility without giving up performance.
Dedicated servers on Bare Metal Cloud platforms have features similar to virtual machines and can be created and deleted in only a few minutes, either through a dashboard or an API, something that’s standard for any existing cloud.
In addition, they enable access to a number of common cloud tools such as O.S. (Operating System) reinstallation, graphs and statistics, elastic storage, cloud-init scripts, security tools, and more.
Bare Metal Cloud vs Cloud
Because they are physical machines, Bare Metal Cloud servers have the advantage of not needing a hypervisor: the layer of software between the hardware and the operating system that virtualizes the infrastructure by segmenting physical machines into multiple VMs (Virtual Machines).
On a conventional cloud, some of the resources are consumed by the hypervisor to run the virtualization layer. Because of that, you need more hardware resources to run an application in virtualized (multi-tenant) environments than in dedicated (single-tenant) environments. For that reason, the cost of running VMs is often higher than running dedicated servers.
Also, because you’ll be sharing resources with many other VMs, the performance of your own VM can be affected by the so-called “noisy neighbor”: a user who makes excessive use of the server’s resources, impacting the security and stability of the entire virtual infrastructure. In that sense, Bare Metal Cloud platforms have a competitive advantage over the conventional cloud.
In sum, when choosing between physical and virtual servers, first assess the demands of your applications and the needs of your company. From this assessment, you will have a better understanding of which option meets your needs best.