• 3 Posts
  • 136 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • browsers did vertical tabs in the 90s and it flopped

    There are extensions for that. They are worse than they used to be, because Mozilla still hasn’t provided APIs to do that properly, about 10 fucking years after they dropped the old APIs. A lot of other feature requests from back then are still open, often filed years before they went through with dropping the old APIs. The best way of doing custom keyboard shortcuts in Firefox is still injecting JavaScript into each page, with all the shortcomings that has. Usability of Firefox is way worse nowadays than it was 10 years ago - and I do understand (and agree with) the decision to dump the legacy APIs, but you can’t just break functionality lots of people use and then not provide replacement APIs for over a decade.

    I’m trying other browsers now and then, but every single one is a dumpster fire. At least the Firefox dumpster fire is a bit less out of control - but that’s the most positive thing I can say about it nowadays.





  • Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that,

    Even then, knowing when not to use k8s or similar things is often more valuable than having deep knowledge of them - a lot of the places where I see k8s or similar tooling used don’t have uptime requirements that warrant the complexity. If something only needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, then putting a few additional layers that can blow up in between probably isn’t the best idea.
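    A minimal sketch of that "monitoring plus re-deploy" approach: poll a health endpoint, and if it doesn’t answer, shell out to ansible-playbook to re-deploy. The URL and playbook name are made-up illustrations, not anything from an actual setup.

    ```python
    # Sketch: healthcheck-driven redeploy, meant to run from cron.
    # HEALTH_URL and PLAYBOOK are hypothetical placeholders.
    import subprocess
    import urllib.request
    import urllib.error

    HEALTH_URL = "http://app.internal/health"   # hypothetical endpoint
    PLAYBOOK = "deploy-app.yml"                 # hypothetical playbook

    def is_healthy(url: str, timeout: float = 5.0) -> bool:
        """Return True if the endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError):
            return False

    def redeploy(playbook: str) -> int:
        """Re-run the ansible playbook; returns its exit code."""
        return subprocess.run(["ansible-playbook", playbook]).returncode

    if __name__ == "__main__":
        if not is_healthy(HEALTH_URL):
            redeploy(PLAYBOOK)
    ```

    A couple of lines of cron (or a systemd timer) running this every few minutes already covers the "back within 10 minutes" requirement, with no extra moving parts in between.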



  • aard@kyu.de to Programmer Humor@programming.dev · Old timers know
    7 months ago

    Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.

    I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.


    Meanwhile over in Europe - went to the doctor in spring as a cough didn’t go away for ages. As suspected there wasn’t much he could do - irritated throat, and just at the time when cold season was giving way to allergy season. So he prescribed some nose spray - and asked if he should also add some antihistamine to the prescription to save me a few EUR (didn’t check, but it’s probably single digits; that stuff is cheap).




    Not entirely sure about that. I have a bunch of systems with the current 8cx, and that’s pretty much 10 years behind Apple performance-wise, while being similar in heat and power consumption. It is perfectly fine for the average office and web-browsing workload, though - a 10 year old mobile i7 is still an acceptable CPU for that nowadays, and the more problematic area of IO speed is better on the Snapdragon. (That’s also the reason Apple gets away with the 8GB thing - the performance impact it causes still leaves a usable system for the average user. The lie is not that it doesn’t work - the lie is that it doesn’t have an impact.)

    From the articles I see about the Snapdragon Elite it seems to have something like double the multicore performance of the 8cx - which is a nice improvement, but still quite a bit away from catching up to the Apple chips. You could have a large percentage of office workers use them and be happy - but for demanding workloads you’d still need to go Intel/AMD/Apple. I don’t think many companies will go for Windows on ARM when they can’t switch everybody over. Plus, the deployment tools for ARM are not very stable yet - big parts of what you’d need for doing deployments in an organization have only been available for ARM for a few months now (I’ve been waiting for that, but haven’t had time to evaluate whether they work).


    It also is perfectly fine for running compile cycles a few minutes long - without running into thermal throttling. I guess if you do some hour-long stuff it might eventually become an issue - but generally the CPUs available in the Airs seem to be perfectly fine with passive cooling, even for longer peak loads. Definitely usable as a developer machine, if you can live with the low memory (16GB for the M1, which I have).

    I bought some Apple hardware for a customer project - pretty much the first time I’d seriously touched Apple stuff since the 90s, as I’m not much of a friend of theirs - and was pretty surprised about the performance as well as the lack of heat. That thing is now running Linux, and it made me replace my aging Thinkpad x230 with a Macbook Pro - where active cooling clearly is required, but you also get a lot of performance out of it.

    The real big thing is that they managed to scale power usage nicely over the complete load range. For the Max/Ultra variants you get comparable performance (and power draw/heat) on high load to the top Ryzen mobile CPUs - but for low load you still get a responsive system at significantly less power draw than the Ryzens.

    Intel is playing a completely different game - they did manage to catch up a bit, but their chips generally still run hot and are power hogs. Currently it’s just a race between Apple and AMD - and AMD is gimped by nobody building proper notebooks with their CPUs. The prices Apple charges for RAM and SSDs are insane, though - they do get additional performance out of their design (unlike pretty much all x86 notebooks, where soldered RAM offers the same throughput as a socketed one), but having an M.2 slot for a lower-speed extra SSD would be very welcome.


  • It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.

    The problem with ssmtp and its relatives when I was testing them was their behaviour in error conditions: due to the lack of any kind of spool they don’t fail very gracefully. If the sending software doesn’t expect that and implement a spool itself, this can very easily lead to lost mail - and it typically has no reason to implement one, as pretty much the only situation where something like sendmail would fail is one where it also couldn’t write to a spool.

    I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to forward that on. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers, behind Emacs, to do so) - but it is badly documented, so postfix might be a better choice if you care about reliability, or just go with ssmtp or similar if you don’t. Or if you do want to dig into it, message me and I’ll help make things more user friendly.
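    The shape of that setup can be sketched in a few lines of Python with the standard library’s mailbox and smtplib modules - this is an illustration of the idea (spool to a Maildir, flush from cron), not the actual programs described above; the spool path and smarthost are made up.

    ```python
    # Sketch: a tiny "sendmail" that drops stdin into a Maildir spool,
    # plus a flush function meant to run from cron that relays spooled
    # mail via SMTP. SPOOL and SMARTHOST are hypothetical placeholders.
    import mailbox
    import smtplib
    import sys
    from email import message_from_bytes
    from email.policy import default

    SPOOL = "/var/spool/maildir-out"   # hypothetical spool location
    SMARTHOST = "mail.example.org"     # hypothetical relay host

    def spool_message(raw: bytes, spool: str = SPOOL) -> str:
        """Write one raw message into the Maildir; returns its key."""
        md = mailbox.Maildir(spool, create=True)
        return md.add(raw)

    def flush_spool(spool: str = SPOOL, host: str = SMARTHOST) -> int:
        """Relay every spooled message; delete each only after success."""
        md = mailbox.Maildir(spool, create=True)
        sent = 0
        with smtplib.SMTP(host) as smtp:
            for key in list(md.keys()):
                msg = message_from_bytes(md.get_bytes(key), policy=default)
                smtp.send_message(msg)  # raises on failure, mail stays spooled
                md.remove(key)
                sent += 1
        return sent

    if __name__ == "__main__":
        spool_message(sys.stdin.buffer.read())
    ```

    The key property is the one discussed above: a message is only removed from the spool after the relay has accepted it, so a down smarthost delays mail instead of losing it, and Maildir’s one-file-per-message layout makes the writes safe without locking.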