
Dell PowerEdge M Overview

Dell PowerEdge M is the #4 ranked solution among top Blade Servers. PeerSpot users give Dell PowerEdge M an average rating of 8 out of 10. Dell PowerEdge M is most commonly compared to HPE Synergy. The top industry researching this solution is comms service providers, accounting for 32% of all views.
What is Dell PowerEdge M?

The Dell PowerEdge M-Series blade servers address the challenges of an evolving IT environment by delivering leading enterprise-class features and functionality. The M-Series delivers a unique array of options configured to meet the needs of your IT environment, both now and in the future.

  • Multiple blade form-factor choices.
  • Long life cycle providing better life cycle management and improved TCO.
  • Modular I/O switches for future scalability.
  • Breakthrough fan technologies for reduction in power consumption.
  • Configuration management via chassis management controller.
  • Persistent MAC/WWN/iSCSI addresses in a switch-agnostic environment.

Dell PowerEdge M Customers

Newport City Homes, Neuroblastoma and Medulloblastoma Translational Research Consortium (NMTRC), Georgian College, AgreeYa Solutions, IIHT Cloud Solutions, Arizona State University, AudienceScience, University of the Incarnate Word (UIW), The Translational Genomics Research Institute (TGen), Holy Cross School


Archived Dell PowerEdge M Reviews (more than two years old)

it_user750378
Partners with 1-10 employees
Consultant
It reduces complexity and is easy for an administrator to manage
Pros and Cons
  • "It is easy to manage by an administrator and we can spend valuable time on other IT issues."
  • "More interconnections with third party equipment and limited I/O selection."

What is most valuable?

  • Faster deployment
  • Efficiency
  • Scalability
  • Reliability
  • Simplified management

How has it helped my organization?

It has reduced complexity. Therefore, it is easy for an administrator to manage, and we can spend valuable time on other IT issues.

What needs improvement?

More interconnects with third-party equipment are needed, and the I/O selection is limited.

For how long have I used the solution?

Two years.

What do I think about the stability of the solution?

We have not encountered any stability issues.

What do I think about the scalability of the solution?

We have not encountered any scalability issues.

How are customer service and technical support?

Good and knowledgeable.

Which solution did I use previously and why did I switch?

We previously used HPE and IBM solutions, but Dell's are more budget-friendly.

How was the initial setup?

It was budget-friendly.

What's my experience with pricing, setup cost, and licensing?

You will get value for your money.

Which other solutions did I evaluate?

The customer has had a good experience with Dell products because they are budget-friendly.

What other advice do I have?

Look for a good product and technology to go with: something reliable that solves problems.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user521352
Principal Architect - Virtual, Storage and Networking infrastructure at a tech services company with 51-200 employees
Consultant
It offers high density and low cost. It is easy to manage.

What is most valuable?

  • Density
  • Cost
  • Feature set
  • Manageability

How has it helped my organization?

The low cost and high density of the solution have allowed us to place more compute assets per data center rack and increase our virtual machine count. Our customers rent VMs and compute space from us via a traditional IaaS model. The more VMs and the more compute per rack, the better.
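
To illustrate the density argument, here is a rough per-rack comparison assuming an M1000e chassis (10U, 16 half-height blades) against 1U rack servers; the VMs-per-host figure is an assumption for illustration only, not a number from this review:

```python
# Illustrative rack-density comparison behind the "more compute per rack" point.
# Chassis figures are for the M1000e (10U, 16 half-height blades);
# VMs per host is an assumed value used only for comparison.

rack_units = 42

# Blade option: M1000e chassis
chassis_height_u = 10
blades_per_chassis = 16
chassis_per_rack = rack_units // chassis_height_u        # 4 chassis
blade_hosts = chassis_per_rack * blades_per_chassis      # 64 hosts

# Rack-mount option: 1U servers, one host per rack unit
rack_hosts = rack_units                                  # 42 hosts

vms_per_host = 25  # assumption, for illustration only
print(f"Blade rack:     {blade_hosts} hosts ~ {blade_hosts * vms_per_host} VMs")
print(f"1U server rack: {rack_hosts} hosts ~ {rack_hosts * vms_per_host} VMs")
```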

What needs improvement?

I would like to see a broader range of chassis networking options. Cisco was an option at one time and then it wasn't; I am not sure whether it is again.

For how long have I used the solution?

I have used it for 3+ years.

What do I think about the stability of the solution?

I have not encountered any stability issues. Our only issues with the solution have revolved around firmware interoperability between networking firmware and storage array firmware.

What do I think about the scalability of the solution?

I have not encountered any scalability issues.

How are customer service and technical support?

Technical support is 10/10.

Which solution did I use previously and why did I switch?

We were using 1U rack mount servers. We switched for density reasons at the time. Today, however, 1U servers can accommodate larger amounts of memory and 22-core processors, so additional data center deployments have moved back to 1U designs.

How was the initial setup?

Initial setup was straightforward. The chassis is easy to use and manage. Easy-to-use interfaces allowed for rapid deployment.

What's my experience with pricing, setup cost, and licensing?

Blade chassis can be found on the used market super cheap. If a company already has the M1000e chassis and support under contract, I would advise buying additional chassis on the refurb and second-hand market. If that is not an option, Dell pricing starts high and ends up low after negotiation. They seem to have a large amount of room to move on the initial quoted pricing.

Which other solutions did I evaluate?

We looked at options from Cisco, but the pricing was too high for half the density.

What other advice do I have?

It’s a solid solution and I recommend it. But in today’s always-on environments and virtual deployments with redundant designs, used hardware is not a bad option. We have quotes on the table for a fully populated chassis with 16 blade servers and 4 MXL switches for around $40,000, compared to about $9K for a single new blade. Given that today’s blades are higher density with newer processors, one- to two-year-old equipment is still a valid solution for our needs.
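
As a rough sketch of that comparison, using the quote figures mentioned above (the per-blade number for the used chassis ignores whatever value the bundled MXL switches carry):

```python
# Rough cost-per-blade comparison based on the quotes cited in this review.
# The used-chassis figure bundles 16 blades plus 4 MXL switches, so the
# per-blade number is an approximation (switch value is not broken out).

used_chassis_price = 40_000   # 16 blades + 4 MXL switches, second-hand
blades_in_chassis = 16
new_blade_price = 9_000       # single new blade, per the quote above

used_per_blade = used_chassis_price / blades_in_chassis   # $2,500
print(f"Used: ~${used_per_blade:,.0f} per blade (chassis and switches included)")
print(f"New:  ~${new_blade_price:,.0f} per blade (blade only)")
print(f"New costs roughly {new_blade_price / used_per_blade:.1f}x more per blade")
```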

Disclosure: My company has a business relationship with this vendor other than being a customer: My company is a Dell Partner and it resells hardware to its customer base.
ITCS user
Senior Engineer at a tech company with 1,001-5,000 employees
Vendor
I can fit 40 logical cores and 400GB of RAM into one half-height blade. That allows me to achieve my desired density in a cloud environment.

What is most valuable?

Fitting 40 logical cores and 400 GB of RAM into one half-height blade allows me to achieve the density I aim for in a cloud environment. What's more, it handles workloads of 25 VMs and more without any noticeable performance penalty. Despite having only 20 physical cores across two sockets on one board, vCPU-ready times remain good throughout, while averaging four vCPUs for each physical core per blade.
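
For context, a quick back-of-the-envelope sketch of that ratio, using the figures quoted above (the average VM shape that falls out of it is purely illustrative, not a measured workload):

```python
# Density check for one half-height blade, using the figures in this review.

physical_cores = 20          # two 10-core sockets per blade
logical_cores = 40           # with Hyper-Threading enabled
ram_gb = 400
vcpu_per_physical_core = 4   # oversubscription ratio quoted in the review

total_vcpus = physical_cores * vcpu_per_physical_core    # 80 schedulable vCPUs
vm_count = 25

# Assumed average VM shape -- purely illustrative, not from the review.
avg_vcpus_per_vm = total_vcpus / vm_count                # ~3.2 vCPUs
avg_ram_per_vm_gb = ram_gb / vm_count                    # 16 GB

print(f"Schedulable vCPUs at 4:1 oversubscription: {total_vcpus}")
print(f"Average VM shape at {vm_count} VMs/blade: "
      f"{avg_vcpus_per_vm:.1f} vCPU / {avg_ram_per_vm_gb:.0f} GB RAM")
```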

How has it helped my organization?

The density achieved by the M630 while maintaining good performance allowed us to cut our hardware footprint by more than half. This makes for a huge cost saving: not only on hardware maintenance and rack space rental, but also on overall hypervisor licensing, which doesn't come cheap.

What needs improvement?

For a half-height blade, Dell did well with this one, not leaving much room for improvement. The ever-increasing boot time, driven by more POST checks with each new generation, makes maintenance tasks somewhat of a headache. That, along with the occasional chip creep, means DIMMs have to be re-seated from time to time. Other than that, not much can be said in terms of cons.

If Dell could reduce the chip creep, especially on the RAM modules, that would be great. It does not occur that often, but often enough to be worth improving. Each blade having its own iDRAC instance makes sense for connecting to a single blade on an ad-hoc basis, but it becomes a pain when you have to access the local console of every blade in a chassis one after the other.

Each connection requires downloading the Java applet from the blade and jumping through a few hoops before it runs. If you close the applet and need to connect to that same blade again, the entire process has to be repeated. Performing the same task on an IBM blade chassis is a breeze: the CMC provides a drop-down list of all slots in the chassis, letting you switch between the local consoles of all blades without repeated applet downloads or even having to configure a management IP address for each blade.

Even though this is more of a blade chassis feature than a blade feature, Dell would do well to steal this page from IBM's book.

For how long have I used the solution?

I have been using it for almost one year.

What do I think about the stability of the solution?

These units have proven very stable. Even with a component failure, in most cases the unit keeps running, allowing you to migrate workloads off of it before attending to any break/fix work.

What do I think about the scalability of the solution?

This solution has proven very scalable.

How are customer service and technical support?

Technical support is generally acceptable. Having to jump through hoops can sometimes be frustrating, especially when your firmware has to be on the latest version before further assistance will be given. Even though I understand the logic behind it, keeping an entire environment on the latest version is nearly impossible at the rate at which firmware, drivers, and other patches are released.

Which solution did I use previously and why did I switch?

I have used IBM blade chassis before. The change to Dell at the time was more a financial one - one that I now fully support from a technical perspective.

How was the initial setup?

Setting up the solution is fairly straightforward. If you have any experience on rack-mounts historically, it should all be familiar. Some networking and storage zoning is to be expected as well, though.

What's my experience with pricing, setup cost, and licensing?

The price tag on the M630 easily compares to that of most beefy rack mounts out there, and that is with very little internal storage. If the aim is raw processing power in a dense, scalable solution with shared storage available, little out there will rival it. Keeping an eye on the exchange rate can also make for huge cost savings, especially when looking to purchase a fully populated 16-blade chassis.

Which other solutions did I evaluate?

Having used predecessors like the M610 and M620 over the years, refreshing with M630s was the obvious choice.

What other advice do I have?

When increasing the CPU core count in a quote request, be wary of the big jumps in price going from an 8-core CPU to a 10-core or higher. Dell doesn't give a per-component price breakdown on its quotes, but getting two quotes for the same blade where only the CPUs differ will clearly show the hefty price they attach to each core.
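
A minimal sketch of that exercise, with placeholder quote amounts rather than real Dell pricing:

```python
# Estimating the implied price per extra core by diffing two otherwise
# identical blade quotes. The quote amounts below are placeholders --
# substitute the figures from your own quotes.

quote_8_core = 7_500    # hypothetical: blade quoted with 2 x 8-core CPUs
quote_12_core = 10_500  # hypothetical: same blade with 2 x 12-core CPUs

extra_cores = (12 - 8) * 2          # 8 additional cores across two sockets
price_per_core = (quote_12_core - quote_8_core) / extra_cores

print(f"Implied price per additional core: ${price_per_core:,.2f}")
```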

Disclosure: I am a real user, and this review is based on my own experience and opinions.