Ever made an IT mistake that stopped time?
We’re talking about that moment — when everything grinds to a halt, palms get sweaty, and you realize business can't go on until it's fixed. Whether it was a single click that took down the whole network or a "quick fix" that broke everything, we've all been there.
Got an epic story that froze the office in place? Share it with us!
The best (or worst?) stories will earn a special badge — because if you can't laugh about it later, did it even happen?
Share below!
Comments
-
A data center had a water cooling system leak. The data center was overheating at 2 a.m. and had to be shut down urgently. We were all able to get back into the data center at 3 p.m. the next day to try to restart everything.
Our servers didn't restart right away; the heat had caused the plastic on the disk bays to shift. Fortunately, when we took the disks out and put them back in, everything worked again for us. I don't know about the thousands of other data center customers.
-
I trusted Datto backup services and had backups of old SBS 2011 machines.
Kaseya purchased Datto.
They decided to make a retroactive charge to my account that immediately put me on STOP service.
(We paid religiously; Datto was great.)
When Kaseya took over, in order to align our billing in their system, they created a pro-rated charge that showed up on a statement dated six months before the day they made the change.
A charge we never saw. That charge basically put our account on stop service. Fast forward a month and we get a notification from our RMM that a client's SBS 2011 had crashed. (Now, we are not idiots; we knew this day was coming and had done our due diligence to ensure continuity, even if it would take a few days.)
So we say, no worries, we pay a motza to keep this backed up, we'll just spin up the current backup. No problem.
Try to access the service.
Panic.
Call Datto, get told Kaseya now owns them.
Put two and two together.
Rebuild the network around the mitigations already in place. Client only down for two days instead of weeks.
The screw-up? Trusting a company like Kaseya to do the right thing.
Further to this, if Atera ever sells and does not advise me, I will be pissed.
-
Oh, I have also caused a production VM to die because of a Hyper-V snapshot issue. Sphincter clenching ensued.
We lost 24 hours for the client, but backups saved the day. This one we all laugh about today.
-
:0
That's a big chunk of time!!! @COOLNETAU
-
I have a good one. It's about how we found out that another company brought down the network of one of our customers.
This was about 25 years ago. We received a phone call to say that the network had gone down and no one could access the servers. I asked the usual questions to see if they had anyone like electricians or carpenters moving partitions who could have accidentally cut the cables (that was a common occurrence at this particular customer's site).
I duly went to site to investigate. They had several BNC networks connected to a thicknet cable which ran all around the offices and through the factory. The first thing I did was go to one end of the thicknet cable and, with my trusty multimeter, measure the resistance. It read 50 ohms when it should have been 25, so I knew there was a break in the cable somewhere.
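For anyone who hasn't fault-found coax Ethernet: a healthy thicknet (10BASE5) segment has a 50-ohm terminator at each end, so from one end the meter sees the two terminators in parallel, which is why 25 ohms is the expected reading. Seeing a full 50 ohms means only the near terminator is reachable, i.e. the cable is broken somewhere. The quick sum:

\[
R_{\text{healthy}} = \frac{R_1 R_2}{R_1 + R_2} = \frac{50 \times 50}{50 + 50} = 25\ \Omega
\]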
I then followed the route of the cable, popping ceiling tiles to see if I could see any breaks. I was on the 1st floor of the office block (floor 2 for our American techs) and went to walk through one of the doors. As I opened it, I could not believe my eyes: there was nothing on the other side of the door; the entire building it had led into had gone. I peered through the doorway, looked to my right, and could see the broken end of the cable just hanging there.
I went back to the FCA and said, "I have located the fault, come with me," and showed him the dangling cable. No one had told him that the attached building was being demolished.

