Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out, disks like to have power connected, and no amount of debugging at the software level will help you.
You still had a 4GB memory limit per process, as well as a total memory limit of 64GB. Especially the first one was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.
I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that.
Even then, knowing when not to use k8s or similar things is often more valuable than having deep knowledge of them - a lot of the setups where I see k8s used don’t have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, maybe putting a few additional layers that can blow up in between isn’t the best idea.
Everything is deployed via ansible - including name services. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
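A minimal sketch of what such an extraction script could look like - assuming, purely for illustration, that the labels live as an `admin_url` host var in a YAML inventory (the variable name and layout are made up, not taken from the setup above):

```python
#!/usr/bin/env python3
"""Pull admin URLs out of an ansible YAML inventory."""
import sys
import yaml  # pip install pyyaml

def walk(group, out):
    # Hosts live under 'hosts', nested groups under 'children'.
    for host, hostvars in (group.get("hosts") or {}).items():
        url = (hostvars or {}).get("admin_url")  # hypothetical label
        if url:
            out.append((host, url))
    for child in (group.get("children") or {}).values():
        walk(child or {}, out)

def main(path):
    with open(path) as f:
        inventory = yaml.safe_load(f)
    found = []
    for group in inventory.values():  # top-level group is usually 'all'
        walk(group or {}, found)
    for host, url in sorted(found):
        print(f"{host:30} {url}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "inventory.yml")
```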
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS, with deployment to staging and the option to deploy to production after tests, over 15 years ago.
Meanwhile over in Europe - went to the doctor in spring as a cough hadn’t gone away for ages. As suspected, there wasn’t much he could do - irritated throat, just at the time when cold season was giving way to allergy season. So he prescribed some nose spray - and asked if he should also add an antihistamine to the prescription to save me a few EUR (didn’t check, but it’s probably single digits; that stuff is cheap).
Nowadays it matters whether you use a compression algorithm that can utilize multiple cores for packing/unpacking larger data. For a multi-GB archive that can be the difference between “I’ll grab a coffee until this is ready” and “I’ll go for lunch and hope it’s done when I come back”.
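To illustrate the difference, a rough sketch using the `zstandard` Python bindings (file names are placeholders; actual wall-clock numbers depend on level, data, and core count):

```python
"""Compare single- vs multi-threaded zstd compression of one file."""
import time
import zstandard as zstd  # pip install zstandard

def compress(src, dst, threads):
    # threads=0 is single-threaded; threads=-1 uses all logical cores.
    cctx = zstd.ZstdCompressor(level=3, threads=threads)
    start = time.monotonic()
    with open(src, "rb") as ifh, open(dst, "wb") as ofh:
        cctx.copy_stream(ifh, ofh)
    return time.monotonic() - start

if __name__ == "__main__":
    for threads in (0, -1):
        took = compress("big.tar", f"big-{threads}.tar.zst", threads)
        print(f"threads={threads}: {took:.1f}s")
```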
As a non-Windows-user I see that as a good thing. LLMs are not going away - but that kind of nonsense will at least make sure all PCs eventually have cheap and reasonably fast AI acceleration. Which is required for killing off centrally hosted LLMs (plus nvidia’s cash grab).
Not entirely sure about that. I have a bunch of systems with the current 8cx, and that’s pretty much 10 years behind Apple performance-wise, while being similar in heat and power consumption. It is perfectly fine for the average office and web browsing workload, though - a 10 year old mobile i7 is still an acceptable CPU for that nowadays, and the more problematic area, IO speed, is better on the Snapdragon. (That’s also the reason why Apple is getting away with that 8GB thing - the performance impact it causes still leaves a usable system for the average user. The lie is not that it doesn’t work - the lie is that it doesn’t have an impact.)
From the articles I see about the Snapdragon Elite, it seems to have something like double the multicore performance of the 8cx - a nice improvement, but still quite a bit away from catching up with the Apple chips. You could have a large percentage of office workers use them and be happy - but for demanding workloads you’d still need to go Intel/AMD/Apple. I don’t think many companies will go for Windows on ARM when they can’t switch everybody over. Plus, the deployment tooling for ARM is not very stable yet - big parts of what you’d need for doing deployments in an organization have only been available for ARM for a few months now (I’ve been waiting for that, but haven’t had time to evaluate whether it works).
It also is perfectly fine for running compile cycles a few minutes long - without running into thermal throttling. I guess hour-long jobs might eventually become an issue - but generally the CPUs available in the Airs seem to be perfectly fine with passive cooling, even for longer peak loads. Definitely usable as a developer machine, if you can live with the low memory (16GB for the M1, which is what I have).
I bought some Apple hardware for a customer project - pretty much the first time I’d seriously touched Apple stuff since the 90s, as I’m not much of a fan of theirs - and was pretty surprised by the performance as well as the lack of heat. That thing is now running Linux, and it made me replace my aging Thinkpad x230 with a Macbook Pro - where active cooling clearly is required, but you also get a lot of performance out of it.
The real big thing is that they managed to scale power usage nicely over the complete load range. For the Max/Ultra variants you get comparable performance (and power draw/heat) on high load to the top Ryzen mobile CPUs - but for low load you still get a responsive system at significantly less power draw than the Ryzens.
Intel is playing a completely different game - they did manage to catch up a bit, but their chips generally still run hot and are power hogs. Currently it’s just a race between Apple and AMD - and AMD is gimped by nobody building proper notebooks around their CPUs. The prices Apple charges for RAM and SSDs are insane, though - they do get additional performance out of their design (unlike pretty much all x86 notebooks, where soldered RAM offers the same throughput as a socketed one), but an M.2 slot for a lower-speed extra SSD would be very welcome.
It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
The problem with ssmtp and related tools when I was testing them was their behaviour in error conditions - due to the lack of any kind of spool they don’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also wouldn’t be able to write to a spool) this can very easily lead to lost mail.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to forward it. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers, behind Emacs, to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, and if you don’t, just go with ssmtp or similar. Or if you do want to dig into it, message me, and I’ll help make things more user friendly.
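A minimal sketch of the spooling half of that idea - the concept, not the actual program: a sendmail stand-in that drops whatever it gets on stdin into a Maildir, with the forwarding cronjob left out and the spool path invented for the example:

```python
#!/usr/bin/env python3
"""sendmail stand-in: spool stdin into a Maildir for later relaying."""
import sys
import mailbox

SPOOL = "/var/spool/outgoing-maildir"  # hypothetical path

def main():
    # Maildir delivery is atomic (write to tmp/, rename into new/),
    # so a crash can't leave a half-written mail in the queue.
    md = mailbox.Maildir(SPOOL, create=True)
    md.add(sys.stdin.buffer.read())

if __name__ == "__main__":
    main()
```

The cronjob side then just iterates over new/, hands each file to an SMTP client, and deletes it on success.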
Because it does JBOD if the controller supports it. Pretty much none of the controllers you’ll find in consumer hardware support that.
JBOD relies on an optional SATA extension, which most of your controllers won’t have.
That leaves you with RAID in the controller - which is a bad idea, as you don’t have much control over what is going on, and recovery if it fails will possibly be messy.
I nowadays typically have three outcomes in similar situations:
As they just want it temporarily lubed, water-based lubricants from the sex shop might be a better option. They don’t leave much residue, and are tested for compatibility with various rubbers.
Here in Finland Fazer fills real egg shells with chocolate for easter, with the 4-pack also sold in egg cartons.
They’ve shown over and over again over the last few years that they’re happy to push arbitrary lies for whatever reason. The NY Times is probably one of the few newspapers where replacing the journalists with ChatGPT would increase quality and factual accuracy.
Bosch has a bunch that are quite useful for sanding in corners: https://www.boschtools.com/us/en/sanding-polishing-43817-ocs-ac/
There are extensions for that. They are worse than they used to be, because Mozilla didn’t provide APIs enabling them to do it properly - about 10 fucking years after dropping the old APIs. A lot of other feature requests from back then are still open, often filed years before the old APIs were dropped. The best way of doing custom keyboard shortcuts in Firefox is still injecting Javascript into each page, with all the shortcomings that has. Usability of Firefox is way worse nowadays than it was 10 years ago - and I do understand (and agree with) the decision to dump the legacy APIs, but you can’t just break functionality lots of people use and then not provide APIs to fix it for over a decade.
I’m trying other browsers now and then, but every single one is a dumpster fire. At least the Firefox dumpster fire is a bit less out of control - but that’s the most positive thing I can say about it nowadays.