Most miners negotiate hard on electricity rate. They’ll spend weeks trying to shave $0.002/kWh off a hosting deal, then sign with a facility running at 92% uptime and wonder why their returns don’t match the spreadsheet.
The math is unforgiving.
What uptime actually costs at scale
Take 1 PH/s of modern hardware (1,000 TH/s, or a handful of recent-gen ASICs at roughly 200 TH/s each). At current network conditions and a hashprice of $0.055/TH/day, that deployment earns around $20,000 per year.
The difference between 95% and 99% uptime is 4 percentage points, or about 14.6 days of lost production annually. At $55/day for that deployment, you lose roughly $800 per year. Per petahash. On a 10 PH/s operation, that is $8,000 in avoidable losses.
Now compare that to a $0.001/kWh improvement in electricity rate. Recent-gen ASICs run at roughly 20 W per TH, so 1 PH/s draws about 20 kW and consumes 480 kWh per day. A $0.001/kWh improvement therefore saves about $0.48/day, or roughly $175/year per PH/s. Better than nothing. But the 4-point uptime improvement is worth more than four times as much.
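The two calculations above can be sketched in a few lines of Python. The 20 W/TH efficiency figure is an assumption about recent-gen hardware; the hashprice and uptime inputs come from the text:

```python
# Back-of-the-envelope comparison: uptime vs. electricity rate.
HASHPRICE = 0.055         # $/TH/day (illustrative figure from the text)
TH_PER_PH = 1_000         # TH per PH/s
EFFICIENCY_W_PER_TH = 20  # ~20 J/TH, assumed for recent-gen ASICs

def downtime_cost_per_ph_year(uptime_low: float, uptime_high: float) -> float:
    """Annual revenue lost per PH/s from running at uptime_low instead of uptime_high."""
    lost_days = (uptime_high - uptime_low) * 365
    return lost_days * HASHPRICE * TH_PER_PH

def rate_savings_per_ph_year(delta_rate: float) -> float:
    """Annual savings per PH/s from a delta_rate ($/kWh) rate improvement."""
    kwh_per_day = EFFICIENCY_W_PER_TH * TH_PER_PH / 1000 * 24  # kW * hours
    return kwh_per_day * delta_rate * 365

print(downtime_cost_per_ph_year(0.95, 0.99))  # ~$803
print(rate_savings_per_ph_year(0.001))        # ~$175
```

Swap in your own fleet's efficiency and the current hashprice; the ratio between the two numbers is what matters, not the absolute figures.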
And critically: the downtime loss scales with hashprice. When hashprice spikes, every hour offline costs more. Bad facilities tend to have outages precisely during peak load periods, when grid demand is highest and your machines should be running hardest.
The electricity rate fixation
The obsession with power rate comes from how hosting deals are marketed. Every facility leads with $/kWh. It is the first number in every term sheet.
Uptime percentage, if listed at all, appears buried in the SLA section. Sometimes it is defined in ways that minimize accountability: “uptime measured from ticket submission,” “scheduled maintenance excluded,” “force majeure events not counted.”
A well-structured SLA defines uptime from the moment power or cooling is lost, includes all outages above a threshold (say, 15 minutes), and issues downtime credits automatically. Most contracts do not do this.
How to ask for uptime data
Before signing, request the trailing 12-month uptime data for the specific facility and rack type. Aggregated company-wide numbers hide location variance. A facility with two buildings might run 99% in one and 94% in the other.
Push for:
- Monthly uptime logs (not averages)
- Longest single outage duration in the past year
- Whether they have backup generators and how fast they kick in
- Grid source: hydro and gas tend to have fewer micro-interruptions than solar or wind-heavy grids
If a hosting company will not provide trailing data, that is your answer.
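Once you have monthly logs, spotting the variance takes a few lines. This sketch uses made-up per-building numbers to show what a company-wide annual average hides:

```python
# Hypothetical monthly uptime logs for two buildings at one facility.
from statistics import mean

logs = {
    "building_a": [0.991, 0.993, 0.990, 0.992, 0.989, 0.994,
                   0.991, 0.990, 0.993, 0.992, 0.991, 0.990],
    "building_b": [0.980, 0.975, 0.880, 0.970, 0.968, 0.972,
                   0.960, 0.910, 0.965, 0.971, 0.969, 0.963],
}

for site, months in logs.items():
    print(site, f"avg={mean(months):.1%}", f"worst month={min(months):.1%}")
```

Building B averages a plausible-sounding ~95.7%, but its worst month was 88% — the kind of outlier a single blended number is designed to bury.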
The difficulty timing problem
Downtime is worst when difficulty is falling.
When difficulty drops, each terahash of online capacity earns more per day. A machine offline during a 5% difficulty decline misses those amplified earnings compared to the same duration of downtime during a flat period. Facilities with better grid stability keep machines online through demand-response events and grid instability, which tend to cluster around the same macro conditions that drive difficulty shifts.
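A rough sketch of the amplification, under the simplifying assumption that hashprice moves inversely with difficulty (BTC price and fees held constant):

```python
# Why an offline day costs more right after a difficulty drop (simplified model:
# hashprice scales as 1/difficulty, all else held constant).
BASE_HASHPRICE = 0.055  # $/TH/day at baseline difficulty
TH_PER_PH = 1_000       # TH per PH/s

def daily_loss_per_ph(difficulty_change: float) -> float:
    """Revenue lost by 1 PH/s offline for a day, after a relative difficulty change."""
    return BASE_HASHPRICE / (1 + difficulty_change) * TH_PER_PH

flat = daily_loss_per_ph(0.0)    # $55.00
drop = daily_loss_per_ph(-0.05)  # ~$57.89
print(f"flat: ${flat:.2f}, after -5% difficulty: ${drop:.2f} "
      f"(+{drop / flat - 1:.1%})")
```

The effect is modest per day, but it stacks on top of the baseline downtime losses and always lands when being online is most valuable.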
Response time matters too
Even with good infrastructure, machines go down. What separates facilities is how fast they respond.
Ask: what is your average response time when a miner goes offline? Top-tier operations have remote reboot capability, 24/7 NOC coverage, and SLAs that commit to a response within 2 hours. Mid-tier facilities might have next-day on-site response. A 12-hour difference in response time means each affected machine loses an extra half day of production per incident, and at the scale of hundreds or thousands of units those incidents recur every month.
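A rough sketch of what response time costs annually. The fleet size, per-unit hashrate, and failure rate here are all assumed for illustration; only the hashprice figure comes from the text:

```python
# Annualized cost of slow incident response, under stated assumptions.
HASHPRICE = 0.055   # $/TH/day (figure from the text)
UNITS = 1_000       # fleet size (assumption)
TH_PER_UNIT = 100   # per-unit hashrate (assumption)
INCIDENTS_PER_YEAR = UNITS * 0.02 * 12  # assumed 2% of units fail per month

def annual_response_cost(response_hours: float) -> float:
    """Revenue lost per year waiting for a response, across all incidents."""
    loss_per_incident = TH_PER_UNIT * HASHPRICE * response_hours / 24
    return INCIDENTS_PER_YEAR * loss_per_incident

print(f"2h response:  ${annual_response_cost(2):,.0f}")
print(f"14h response: ${annual_response_cost(14):,.0f}")
```

The absolute numbers depend entirely on the assumed failure rate, but the structure holds: the gap between the two figures scales linearly with the response-time difference, so it belongs in the same spreadsheet as the power rate.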
Practical takeaway
Electricity rate is a marketing number. Uptime is a performance number. The better facilities know this, which is why they publish actual uptime data rather than just a rate card.
Before you sign: request the trailing 12-month uptime logs, understand how the SLA defines and credits downtime, and check generator backup specs. A 3 to 4 point improvement in uptime is worth more than most electricity rate negotiations you will have.
Stop optimizing for the number on the brochure.