I feel lucky to have avoided this so far. It’s really not like this on my team. I write a fair bit of code and review a ton of code.
I think it was the EPA’s National Computer Center. I’m guessing based on location though.
When I was in high school we toured the local EPA office. They had the most data I’ve ever seen accessible in person. I’m going to guess how much.
It was a dome with a robot arm that spun around and grabbed tapes. It was 2000, so I’m guessing 100 GB per tape. But my memory on the shape of the tapes isn’t good.
Looks like the tapes were four inches tall. Let’s round up to six inches for housing and easier math. The dome was taller than me. Let’s go with 14 shelves.
Let’s guess a six-foot shelf diameter. So, like 20 feet of circumference. Tapes were maybe 0.8 inches a pop. With space between for robot fingers and stuff, let’s guess 240 tapes per shelf.
That comes out to about 300 terabytes. Oh. That isn’t that much these days. I mean, it’s a lot, but you could easily get that in spinning disks now, with no robot-arm seek time. Fill the same library with modern tapes, though, and it’d be around 60 petabytes.
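Showing my work, in case anyone wants to poke at it. The only number that isn’t in the guesses above is the modern-tape capacity; I’m assuming something like LTO-9’s roughly 18 TB native per cartridge.

    # Back-of-envelope capacity for the tape dome, using the guesses above.
    import math

    shelf_circumference_in = math.pi * 6 * 12   # six-foot diameter -> ~19 ft, sanity check on "like 20 feet"
    tapes_per_shelf = 240                       # 0.8 in per tape plus robot-finger room
    shelves = 14
    tapes = tapes_per_shelf * shelves           # 3,360 tapes

    gb_per_tape_in_2000 = 100                   # circa-2000 guess
    tb_per_modern_tape = 18                     # assumption: roughly LTO-9 native capacity

    print(tapes * gb_per_tape_in_2000 / 1000, "TB back then")        # ~336 TB, call it 300
    print(tapes * tb_per_modern_tape / 1000, "PB with modern tapes") # ~60 PB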
I’m not sure how you’d transfer it these days. A truck, presumably. But you’d probably want to transfer a copy rather than disassemble it. That sounds slow too.
I haven’t looked at the man page, but I expect you can limit it if you want, and that the parser for that parameter knows about these suffix names. If it were me, it’d be one parser for byte-size values, reused for the chunk size, the limit, the sync interval, and whatever else dd takes.
It’s also probably limited by the size of whatever counter tracks the byte count. I think dd reports the number of bytes copied at the end even when you haven’t set a limit.
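Something like this is what I mean by one shared parser. It’s just a sketch with a made-up suffix table, not dd’s actual code or its real suffix list.

    # Hypothetical shared parser for byte-size arguments (not dd's actual code).
    SUFFIXES = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

    def parse_size(text: str) -> int:
        """Turn '512', '64K', '1M', '2G' into a byte count."""
        text = text.strip().upper()
        for suffix, factor in SUFFIXES.items():
            if text.endswith(suffix):
                return int(text[: -len(suffix)]) * factor
        return int(text)

    # The same helper could back the chunk size, a copy limit, and so on.
    assert parse_size("1M") == 1_048_576
    assert parse_size("64K") == 65_536
    assert parse_size("4096") == 4_096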
I used Gerrit and Zuul a while back at a place that really didn’t want to use GitHub. It worked pretty well, but it took a lot of care and maintenance to keep it all ticking along for a bunch of us.
It has a few features I loved that GitHub took years to catch up to. Not sure there’s a moral to this story.
When someone is having a computer problem I ask them to restart first. Not because I think they don’t know to do it, but just in case. Some people don’t know. Sometimes people forget. Obvious advice is useful sometimes.
We knew spooks were all up in the phone network. They’d show up and ask installers to run them some cables and configure ports in a certain way. I was friends with folks who were friends with the installers.
I work on software for finding things and summarizing stuff. We went through one of those Apache 2 -> other-license relicensings a while back.
I can’t really talk about specifics. But we all have a working imagination. I think about it a lot. But I still do the job. There are good folks doing good things with it.
I’m a pretty good engineer. Not the best I’ve ever met by a long shot, but I’m good. But I’m very outgoing for an engineer.
Ironically, that’d describe both my parents too.
I had this one weekend when I was in tenth grade where I did nothing but write code on a fun project. Then I decided I didn’t like writing code. I don’t know why. Kids are weird.
I decided then I couldn’t make it my job. I managed not to program for three years. It turns out I’m bad at everything else. Miserable.
That was 22 years ago. That’s still all I’m good at.
Pre-merge code review should stop that kind of thing. I honestly haven’t seen anything like this in years.
Chrono Trigger. The Magus Fight. The music.
FF6. Magitek Factory. Also music.
Metal Gear Solid. Psycho Mantis. Late at night. Tired.
Eternal Sonata. Last Fight. Intro line.
Hades. Final boss. Extreme Measures 4.
NES Tetris. Crashing.
Windows -> RedHat -> Windows -> Gentoo -> Ubuntu -> RHEL -> Ubuntu -> Debian -> Arch
Do folks still use Logstash here? Filebeat and ES get you pretty far. I’ve never been deep in ops land though.
I put googly eyes on things.
I’ve been using a Sonicare for years now. I think it was expensive, but it’s lasted forever and does a great job.
Try your local library.