• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: July 30th, 2023


  • computergeek125@lemmy.world to 196@lemmy.blahaj.zone · Rule
    7 months ago

    I’ve been that guy at a computer store. I had already found what I needed on my own, since I was quite familiar with the store, and was browsing a different aisle to look at the shinies. Overheard a customer ask a salesperson what the difference between product x and y was; they were marked very similarly on the box, but one was something like 30-50% more.

    I noticed the salesperson was quite unsure of what this specific technical difference was, so I added the quick TL;DR of the general difference and the words the manufacturers use to differentiate them (since several product pairs elsewhere in the aisle matched both classes).

    Customer says “oh ok, that makes sense”. I forget which one he decided on (I think it might have been the more expensive one, kek), but the salesperson put his commission tracking sticker on the selected box and the customer wandered away, hopefully happy. The salesperson turned to me sheepishly: “Um… I guess you probably don’t need help?” I responded, “No, I’m just browsing, but do you want to put your sticker on this gizmo I found in the bargain bin over there?” He seemed happy with this arrangement, added the commission sticker, and we parted ways.

    …did I inadvertently make a pact with a different type of fae?










  • As others said, it depends on your use case. There are lots of good discussions here about mirroring vs single disks, different vendors, etc. Some backup systems may want you to have a large filesystem available that would not otherwise be attainable without RAID 5/6.

    Enterprise backups tend to follow the recommendation called 3-2-1:

    • 3 copies of the data, of which
    • 2 are backups, and
    • 1 is off-site (and preferably offline)

    On my home system, I have 3-2-0 for most data and 4-3-0 for my most important virtual machines. My home system doesn’t have an off-site, but I do have two external hard drives connected to my NAS.

    • All devices are backed up to the NAS for fast recovery access, with an RPO between 24h and 1 week
    • The NAS backs up various parts of itself to the external hard drives every 24h
      • Data is split up by role and convenience factor - just fitting things together like Tetris pieces and spreading the NAS contents across the two drives (a rough sketch of that split follows this list)
      • The most critical data for me to have first during a recovery is backed up to BOTH external disks
    • Coincidentally, the two drives happen to be from different vendors, but I didn’t initially plan it that way: the Seagate drive was a gift and the WD drive was on sale
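
    For illustration, here is a minimal sketch of that kind of nightly split job. The mount points and dataset names are made up, and the real thing could just as easily be the NAS vendor's built-in backup app or rsync in cron:

    ```python
    # Hypothetical nightly split of NAS datasets across two external drives.
    # The critical dataset is written to BOTH disks; everything else is
    # spread across the drives roughly by role and size.
    import subprocess

    DRIVE_A = "/mnt/backup_seagate"   # hypothetical mount points
    DRIVE_B = "/mnt/backup_wd"

    jobs = {
        "/volume1/media":     [DRIVE_A],
        "/volume1/vm-images": [DRIVE_B],
        "/volume1/documents": [DRIVE_A, DRIVE_B],  # critical: goes to both
    }

    for source, targets in jobs.items():
        for target in targets:
            # Mirror the source onto the target drive, pruning deleted files
            subprocess.run(["rsync", "-a", "--delete", source, target], check=True)
    ```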

    Story time

    I had one of my two backup drives fail a few months ago. Literally nothing of value was lost - I just went down to the electronics shop and bought a bigger drive from the same vendor (preserving the one-of-each-vendor approach), reformatted the disk, recreated the backup job, then ran the first transfer. Not a big deal; all the data was still in 2 other places - the source itself, and the NAS primary array.

    The most important thing to determine when you plan a backup is how valuable the data is to you - that’s roughly how much you should be willing to spend on keeping it safe.



  • I’m running Nextcloud (non-Docker version) and I don’t see nearly so many client updates - usually one every few weeks, which is a reasonable expected pace. Server updates are less frequent.

    On Windows (all of my primary devices), I just install the NC client update and skip the Explorer restart, pending a full reboot later. ’Tis the nature of literally anything that deeply integrates with Explorer. I’ve seen Explorer “death” during updates from several vendors that have similar Explorer plugins, not just NC. Explorer sometimes just decides to nope out even without NC updating.

    Now, on one device I hadn’t opened in a while, I saw NC run two updates in a row, but that was my fault for procrastinating on the first one.

    Here’s the desktop release history: https://github.com/nextcloud/desktop/releases
    I don’t see anything like “one every day” in the window between Dec 6 and today, unless you were on the release candidate builds, which may have been more frequent in a few spots.
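
    If you want to check the cadence yourself, here is a rough sketch using the public GitHub releases API (unauthenticated requests are rate-limited, and this only fetches the most recent page):

    ```python
    # List recent Nextcloud desktop releases with their publish dates,
    # separating stable releases from pre-releases/release candidates.
    import requests

    url = "https://api.github.com/repos/nextcloud/desktop/releases"
    releases = requests.get(url, params={"per_page": 100}, timeout=30).json()

    for rel in releases:
        kind = "pre-release" if rel["prerelease"] else "stable"
        print(rel["published_at"][:10], rel["tag_name"], f"({kind})")
    ```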




  • In the IT world, we just call that a server. The usual golden rule for backups is 3-2-1:

    • 3 copies of the data total, of which
    • 2 are backups (not the primary access), and
    • 1 of the backups is off-site.

    So, if the data is only server side, it’s just data. If the data is only client side, it’s just data. But if the data is fully replicated on both sides, now you have a backup.

    There’s a related adage regarding backups: “if there are two copies of the data, you effectively have one. If there’s only one copy of the data, you can never guarantee it’s there.” Basically, it means you should always assume one copy somewhere will fail and you will be left with n-1 copies. In your example, if your server failed or got ransomwared, you wouldn’t have a complete dataset, since the local computer doesn’t have a full replica.
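
    To make the counting concrete, here’s a toy sketch of checking an inventory of copies against 3-2-1 (entirely illustrative; the locations and flags are made up):

    ```python
    # Toy 3-2-1 check: count copies, backups, and off-site backups.
    from dataclasses import dataclass

    @dataclass
    class Copy:
        location: str
        is_backup: bool   # False for the primary/working copy
        offsite: bool

    def meets_3_2_1(copies: list[Copy]) -> bool:
        backups = [c for c in copies if c.is_backup]
        return (len(copies) >= 3
                and len(backups) >= 2
                and any(c.offsite for c in backups))

    # The example above: a server copy plus the same data synced to one client
    copies = [
        Copy("home server", is_backup=False, offsite=False),
        Copy("laptop sync", is_backup=True, offsite=False),
    ]
    print(meets_3_2_1(copies))  # False - only two copies and nothing off-site
    ```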

    I recently had a backup drive fail on me, and all I had to do was buy a new one. No data loss; I just regenerated the backup as soon as the drive was spun up. I’ve also had to restore entire servers that have failed - minimal data loss since the last backup, but nothing I couldn’t rebuild.

    Edit: I’m not saying what you’re asking for is wrong or bad, I’m just saying “backup” isn’t the right word to ask about. It’ll muddy some of the answers as to what you’re really looking for.



  • Oh, I am in fact giving the giant autocomplete function little credit. But just like any computer system, an AI can reflect the biases of its creators and dataset. Similarly, the computer can only give an answer to the question it has been asked.

    Dataset-wise, we don’t know exactly what the bot was trained on, other than “a lot”. I would like to hope its creators acted with good judgement, but as creators/maintainers of the AI, they may have an inherent (even if unintentional) bias towards the creation and adoption of AI. Just like how some speech recognition models have issues with some dialects, or image recognition has issues with some skin tones - both based on the datasets they ingested.

    The question itself invites at least some bias and only asks for benefits. I work in IT, and I see this situation all the time with the questions some people have in tickets: the question will be “how do I do x”, and while x is a perfectly reasonable thing for someone to want to do, it’s often not what they actually need in the end. As reasoning humans, we can take the context of a question to provide additional details without blindly reciting information from the first few lmgtfy results.

    (Stop reading here if you don’t want a ramble)


    AI is growing yes and it’s getting better, but it’s still a very immature field. Many of its beneficial cases have serious drawbacks that mean it should NOT be “given full control of a starship”, so to speak.

    • Driverless cars still need very good markings on the road to stay in lane, but a human has better pattern matching to find lanes - even in a snow drift.
    • Research queries are especially affected, with chatbots hallucinating references that don’t exist despite being formatted correctly. On that point specifically:
      • Two lawyers have been caught separately using chatbots for research and submitting their work without validating the answer. They were caught because they cited a case which supported their arguments but did not exist.
      • A chatbot trained to operate as a customer support representative invented a refund policy that did not exist. As decided by a small claims court, the airline was forced to honor this policy.
      • In an online forum, while trying to determine if a piece of software had a specific functionality, I encountered a user who had copied the question into ChatGPT and pasted the response. It described a command option that was exactly what I and the forum poster needed, but sadly it did not exist. On further research, there was a bug report that had been open for a few years to add this functionality, but it had not yet been implemented.
      • A coworker asked an LLM if a specific Windows PowerShell command existed. It responded with documentation for a very nicely formatted command that was exactly what we needed, but alas did not exist. It had to be told that it was wrong four times before it gave us an answer that worked.

    While OP’s question is about the benefits, I think it’s also important to talk about the drawbacks at the same time. All that information could be inadvertently filtered out. Would you blindly trust the health of your child or significant other to a chatbot that may or may not be hallucinating? Would you want your boss to fire you because the computer determined your recorded task time to resolution was low? What about all those dozens of people you helped in side chats that don’t have tickets?

    There’s a great saying about not letting perfection get in the way of progress, meaning that we shouldn’t get too caught up on getting the last 10-20% of completion. But with decision making that can affect people’s lives and livelihoods, we need to be damn sure the computer is going to make the right decision every time, or not trust it to have full controls at all.

    As the future currently stands, we still need humans constantly auditing the decisions of our computers (both standard procedural and AI) for safety’s sake. All of those examples above could have been solved by a trained human gating the result. In the PowerShell case, my coworker was that person. If we’re trusting the computers with as much decision making as that Bing answer proposes, the AI models need to be MUCH better trained at their jobs than they currently are. Am I saying we should stop using and researching AI? No, but not enough people currently understand that these tools have incredibly rough edges, and the ability for a human to verify answers is absolutely critical.

    Lastly, are humans biased? Yes absolutely. You can probably see my own bias in the construction of this answer.



  • I don’t have an immediate answer for you on encryption. I know most of the communication for AD is encrypted in flight, and on disk passwords are stored hashed unless the “store passwords using reversible encryption” option is checked. There are (in Microsoft terms) gMSAs (group-managed service accounts), but other than using one for ADFS (their OAuth provider), I have little knowledge of how it actually works on the inside.

    AD also provides encryption key backup services for BitLocker (MS full-partition encryption for NTFS) and the local account manager I mentioned, LAPS. Recovering those keys requires either a global admin account or specific permission delegation. On disk, I know MS has an encryption provider that works with the TPM, but I don’t have any data about whether that system is used (or where the decryptor is located) for these account types with recoverable credentials.

    I did read a story recently about a cyber security firm working with an org: they had worked their way all the way up to domain admin, but needed a biometric-unlocked Bitwarden to pop the final backup server and “own” the org. They indicated that there was native Windows encryption going on, and they managed to break in using a now-patched vulnerability in Bitwarden to recover a decryption key, achievable by resetting the domain admin’s password and doing some Windows magic. On my DC at home, all I know is that it doesn’t need my password to reboot, so there’s credential recovery going on somewhere.

    Directly to your question about short-term-use passwords: I’m not sure there’s a way to do it out of the box in MS AD without getting into some overcomplicated process. Accounts themselves can have per-OU password expiration policies that are nanosecond accurate (I know because I once accidentally set a password policy to 365 nanoseconds instead of a year), and you can even set whole-account expiry (which would prevent the user from unlocking their expired password with a changed one). Theoretically, you could design/find a system that interacts with your domain to set, impound/encrypt, and manage the account and password expiration of a given set of users, but that would likely be add-on software.
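
    For context on how those sub-second durations are even possible: AD stores these intervals as 100-nanosecond “ticks” (accountExpires counts ticks since 1601-01-01 UTC, and maxPwdAge is a negative tick count). Here is a minimal Python sketch of the conversion - illustrative only, since a real tool would set these attributes through LDAP or PowerShell rather than computing them by hand:

    ```python
    # Convert human-friendly times into AD's 100-nanosecond tick format.
    from datetime import datetime, timedelta, timezone

    AD_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)
    TICKS_PER_SECOND = 10_000_000  # one tick = 100 nanoseconds

    def to_account_expires(when: datetime) -> int:
        """Value for the accountExpires attribute (ticks since the AD epoch)."""
        return int((when - AD_EPOCH).total_seconds() * TICKS_PER_SECOND)

    def to_max_pwd_age(age: timedelta) -> int:
        """Value for maxPwdAge, which is stored as a negative tick count."""
        return -int(age.total_seconds() * TICKS_PER_SECOND)

    # Example: expire an account at the end of 2025, rotate passwords yearly
    print(to_account_expires(datetime(2025, 12, 31, tzinfo=timezone.utc)))
    print(to_max_pwd_age(timedelta(days=365)))
    ```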