Why is it taking up so much space in compressed form? Text compresses very well, so you should be able to save tons of space compared to DB tables.
I started because I wanted to get around censorship in my country. I also wanted to watch stuff in the original language; here we dub everything.
Can someone explain to me how this movie stands in the whole Godzilla franchise?
Suppose I am a guy who only watched the original Godzilla movies from '85.
Yeah, I would redownload all of those instead of transcoding. They are all publicly available with very good encodes.
Are those your own Blu-rays? Then share them before compressing.
Transcoding is hard. There is no way your transcoding settings are going to be one-size-fits-all. I am currently encoding the famous iKaos Dragonball release, and I did 48 samples before deciding what configuration to use.
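If you want to try the same workflow, the usual approach is to cut a few short clips from different kinds of scenes and run each candidate configuration on them. A rough ffmpeg sketch (the timestamp, codec and CRF value are just examples, not the settings I landed on):

```bash
# cut a 30-second sample without re-encoding
ffmpeg -ss 00:10:00 -i input.mkv -t 30 -c copy sample.mkv

# encode the sample with one candidate configuration
ffmpeg -i sample.mkv -c:v libx265 -preset slow -crf 20 -c:a copy sample_crf20.mkv
```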
You are better off downloading stuff from torrents, especially for newer media. You'll find a community that has collectively put 100x your time into transcoding. That will also save you tremendous electricity costs.
Also look into VMAF for quality metrics. Consider that switching to uncompressed 1080p might bring you close to your goal with very, very low effort.
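If your ffmpeg build includes libvmaf, you can score a sample directly against the source (filenames are placeholders; both streams need matching resolution and frame rate):

```bash
# distorted file first, reference second; a VMAF score is printed at the end
ffmpeg -i sample_crf20.mkv -i sample.mkv -lavfi libvmaf -f null -
```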
Btw, can you share the title list?
It is unrealistic that in a stable software release there is suddenly, after you tested your backup, a hard bug which prevents recovery.
How is it unrealistic? Think of this:
Going unmaintained is a non-issue, since you can still restore from your backup. It is not like a subscription or proprietary software, which is no longer usable when you stop paying for it or the company owning it goes down.
Until they hit a hard bug or don't support newer transport formats or scenarios. Also, the community dries up eventually.
As long as you understand that simply syncing files does not protect against accidental or malicious data loss like incremental backups do.
Can you show me a scenario? I don’t understand how incremental backups cover malicious data loss cases
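The classic scenario goes something like this: ransomware (or a buggy script) trashes your files, and the next plain sync faithfully mirrors the damage over the only good copy. With incremental snapshots the older versions survive. A minimal sketch using rsync's --link-dest (all paths are made up):

```bash
today=$(date +%F)
# unchanged files are hard-linked against the previous snapshot,
# so each snapshot costs almost nothing extra on disk
rsync -a --delete --link-dest=/backup/latest /data/ "/backup/$today/"
ln -sfn "/backup/$today" /backup/latest
# a file encrypted yesterday is still clean in the snapshot from the day before
```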
How does this look safer for rsync? To me the risk looks similar, but I might not know the development background of these projects.
Rsync is available out of the box in most Linux distros and is used widely not only for backups but for a lot of other things, such as repository updates and transfers from file hosts. This means a lot more people are interested in it. Also, the implementation, looking at the source code, is cleaner and easier to understand.
how do you deal with it when just a file changes?
I think you should consider that not all files are equal. Rsync for me is great because I end up with a bunch of disks that contain an exact copy of the files I have on my own server. Those files don't change frequently; they are movies, pictures, songs and so on.
Other files, such as code, configuration, files on my smartphone, etc., are backed up differently. I use git for most stuff that fits its model, and Syncthing for my temporary folders and my mobile phone.
Not every file suits the same backup model. I trust that files that get corrupted or lost are in my weekly rsync backup. A configuration file I messed up two minutes ago is in git.
What other people are saying is that you rsync onto an encrypted file system or other types of storage. What are your backup targets? In my case I own the disks, so I use LUKS partition -> ext4 -> mergerfs to end up with a single volume I can mount on a folder.
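Roughly, the stack comes up like this (device names, mount points and the mergerfs options here are placeholders, not my exact config):

```bash
# unlock each LUKS partition; this exposes /dev/mapper/<name>
cryptsetup open /dev/sda1 disk1
cryptsetup open /dev/sdb1 disk2

# mount the ext4 filesystems living inside
mount /dev/mapper/disk1 /mnt/disk1
mount /dev/mapper/disk2 /mnt/disk2

# pool the mounts into one volume with mergerfs
mergerfs /mnt/disk1:/mnt/disk2 /mnt/pool -o allow_other,category.create=mfs
```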
nope, can you resend?
I am a simple man, so I use rsync.
I set up a mergerfs drive pool of about 60 TiB and rsync to it weekly.
Rsync seems daunting at first, but then you realize how powerful and, most importantly, reliable it is.
It’s important that you try to restore your backups from time to time.
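The weekly job is basically one command, and the restore test can be as lazy as pulling back a random file and comparing checksums (paths are placeholders):

```bash
# weekly mirror of the pool onto the backup target
rsync -aHAX --delete /mnt/pool/ /mnt/backup/

# lazy restore test: restore one random file and compare it to the live copy
f=$(find /mnt/backup -type f | shuf -n 1)
mkdir -p /tmp/restore-test
rsync -a "$f" /tmp/restore-test/
sha256sum "/tmp/restore-test/$(basename "$f")" "/mnt/pool/${f#/mnt/backup/}"
```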
One of the main reasons why I avoid software such as Kopia or Borg or Restic or whatever is in fashion:
could you link the article?
Can you give me more information on Near and cyberbullying?
Fellow Italian pirate here, using Gentoo for servers and laptop since 2014. Very interesting, thank you for sharing. Would love to have a chat someday.
152 GB in my drives
qbit manage
That is interesting advice. Regarding containers, they don't fit my use case.
Your question is so generic that it is difficult to reply. I’ll tell you about my use case then so that you can try to figure out yours.
My goal is to be a respectful citizen. I divide my torrents into three categories:
I bought tons of space (recently converted to three drives, 20 TB each) and use a virtual machine locked behind a VPN. Even if I forget to pay, the virtual machine is bound to the tunnel so that traffic doesn't go out except for the LAN, so no leaks.
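One way to do that binding is a few firewall rules inside the VM; this is only a sketch, and the tunnel interface, LAN range, VPN endpoint and port are all assumptions to adapt:

```bash
iptables -P OUTPUT DROP                           # default: drop all outbound traffic
iptables -A OUTPUT -o lo -j ACCEPT                # keep loopback working
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT    # LAN stays reachable
iptables -A OUTPUT -p udp -d 203.0.113.1 --dport 1194 -j ACCEPT  # your VPN server, so the tunnel can come up
iptables -A OUTPUT -o tun0 -j ACCEPT              # everything else only through the tunnel
```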
The VM has two torrent clients: Transmission and qBittorrent.
I tend to leave everything in Transmission seeding forever; the stuff in qBittorrent gets seeded until a 2.5 or 4.0 ratio, depending on my mood.
At the moment I have a 90.2 ratio on Transmission and many, many TB of uploaded stuff. That should be enough to feel like you are giving back.
My point: if you're getting started with self-hosting, you have to embrace and accept the self-inflicted punishment. Good luck everybody; I don't know if I can keep choosing to get disappointed.
I would say that your self-inflicted punishment is using Windows. Switch to Debian and thank me in six months.
First of all, ignore the trends. Fuck Docker, fuck NixOS, fuck Terraform, or whatever tech stack gets shilled constantly.
Find a tech stack that is easy FOR YOU and settle on that. I haven’t changed technologies for 4 years now and feel like everything can fit in my head.
Second of all, look at the other people using commercial services and see how stressed they are. Google banned my account, YouTube has ads all the time, the app for service X changed and is unusable, and so on.
Nothing comes for free in terms of time and mental baggage.