
Operation 'Touchy' - Running a Command That Made My CPU Hate Me Because I Can't Google


Oh, look who's back! You must really like watching me mess things up. Let me tell you how I royally screwed the CPU, all because I couldn't be bothered to Google something before I decided to set my terminal on fire.

Yet another backstory

It all started when I moved to a new location for work. Due to some oh-so-dramatic circumstances, I had to leave my beloved PC behind. I'm the kind of person who keeps my system updated and running like a Swiss watch, so naturally, I freaked out. I convinced myself that leaving it idle would cause the poor thing to spontaneously combust or something. Then, the real panic set in when I remembered SSDs lose data over time if they aren't used.

I was right! SSDs do lose data if they aren't used for a long time. But get this: it's not a couple of weeks or months like I thought. Nope. Turns out it's years. Y E A R S. Who knew? Certainly not me, because, you know, I didn't bother Googling it.

Lucky for you, I didn't bother to look it up before deciding to write this article...

What can I do about it?

What's the most impractical, yet technically valid way to solve this? How can I ensure every SSD data cell is 'saved'? Surely just opening the files will keep them 'safe'. After all, I have a ton of files—mostly photos of food, you know, really essential stuff. So what if... just what if we could use bash? It's bash, after all—surely that'll work.

The monstrosity

Hear me out. Instead of opening each file manually like some kind of peasant, we can just read the contents of the files. This will simulate the files being used and—hopefully—'refresh' the electric charges in the SSD's flash cells. touch was the first candidate. But, plot twist: it updates the modification timestamp—pretty important metadata. A huge flaw, right? Oh, and let's not even get started on the fact that it wouldn't actually 'read' the files like we want. Enter cat, the true hero of the story—because, of course, the most dramatic way to do this is by spitting out every file's contents to /dev/null.
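Don't take my word on the touch thing, by the way. Here's a quick throwaway sketch you can run yourself (it assumes GNU coreutils' stat -c %Y for the mtime; BSD/macOS stat takes different flags):

```shell
# Quick demo: cat leaves the modification time alone; touch bumps it.
# (Assumes GNU coreutils: `stat -c %Y` prints mtime as epoch seconds.)
f=$(mktemp)
before=$(stat -c %Y "$f")

sleep 1
cat "$f" > /dev/null            # just reads the file
after_cat=$(stat -c %Y "$f")    # mtime unchanged

touch "$f"                      # rewrites the timestamps
after_touch=$(stat -c %Y "$f")  # mtime is now strictly newer

echo "cat left mtime alone:  $((after_cat == before))"
echo "touch bumped mtime:    $((after_touch > before))"
rm -f "$f"
```

Both echoes should print 1, which is exactly why touch got fired from this job.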

I present to you an illegal solution in bash for a non-existent problem:

sudo find /home/ -type f -print0 | pv -s $(sudo find /home/ -type f -print0 | wc -c) | xargs -0 -P 0 -I {} cat {} > /dev/null

I'd encourage you to use explainshell to dive deeper into the command, but let's be real, who's got the time for that? Here's the gist: we grab the full paths of every file under /home/ and read them with cat. pv is there to keep you entertained with a flashy progress bar (sized by the byte count of the filename list, not the file contents, so take it with a grain of salt). And since no one wants to actually look at millions of file contents, we toss them into the gaping abyss that is /dev/null. Oh, and -P 0 tells xargs to launch as many parallel processes as it can, because why not?
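And if spawning one cat per file sounds like overkill even by this article's standards, a tamer (entirely hypothetical) variant lets find do the batching itself: no progress bar, no sudo, same files read.

```shell
# Tamer variant: `-exec cat {} +` packs as many filenames as fit into
# each cat invocation, so only a handful of processes get spawned
# instead of one per file. read_all is a made-up helper name.
read_all() {
    find "$1" -type f -exec cat {} + > /dev/null
}

# Usage: read_all /home
```

Less dramatic, sure, but your process table will thank you.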

No modification timestamps updated. All files "read" and data blocks in the SSD "saved".

Disclaimer

It's not an ideal solution by any stretch of the imagination. We're basically setting CPU cycles on fire here. This command will take a while (it took me 30 minutes, but who's counting?), so don't even think about using your system for anything productive in the meantime. But hey, at least we've solved a problem that didn't even exist. Mission accomplished, right? :P