or fixing windows by only using WSL and reading the arch wiki
Pup Biru
- 0 Posts
- 470 Comments
that’s a good and bad thing though…
it’s easy to reference code, so it leads to tight coupling
it’s easy to reference code, so let’s pull this out into a separately testable, well-documented, reusable library
my main reason for ever using a monorepo is to separate out a bunch of shared libraries into real libraries, and still be able to have eg HMR
google does a lot of things that just aren’t realistic for the large majority of cases
before kubernetes, you couldn’t just reference borg and say “well google does it” and call it a day
i’d say it’s less that it’s inadequate, and more that it’s complex
for a small team, build a monolith and don’t worry
for a medium team, you’ll want to split your code into discrete parts (libraries shared across different parts of your codebase, services with discrete test boundaries, etc)… but you still need coordination of changes across all those things, and team members will probably be touching every part of the codebase at some point
for large teams, you want to take those discrete parts and make them fairly independent, and able to be managed separately: different languages, different deployment patterns, different test frameworks, heck even different infrastructure
a monorepo is a shit version of real, robust tooling in many categories… it’s quick to set up, and allows you a path to easily change to better tooling when it’s needed
You should really not need to do a PR across multiple repos.
different ways of treating PRs… it’s a perfectly valid strategy to say “a PR implements a specific feature”, in which case you might work in a backend, a frontend, and a library… of course, those PRs aren’t intrinsically linked (though they do have dependencies between them… heck i wouldn’t even say it’d be uncommon or wrong for the library to have schemas that do require changes in both the frontend and backend)
if you implement something in eg the backend, and then get retasked with something else, or the feature gets dropped, then sure it’s “working” still, but to leave unused code like that would be pretty bad… backend and frontend PRs tend to be fairly closely tied to each other
a monorepo does far more than i think you think it does… it’s a relatively low-infrastructure way of adding internal libraries shared across different parts of your codebase, external libraries without duplication (and ensuring versions are consistent, where required), and coordinating changes, and plenty more
can these things be achieved with build systems and deployment tooling? absolutely… but if you’re just a small team, a monorepo could be the right call
of course, once the team grows in size it’s no longer the correct option… real tooling is probably going to be faster and better in every way… but a monorepo allows you to choose when to replace different parts of the process… it emulates an environment with everything very separated
i’d say they’re pretty equivalent
a monorepo is far easier to develop a single-language, fairly monolithic (ie you need the whole application to develop any part) codebase in
(though as soon as you start adding multiple languages or it gets big enough that you need to work on parts without starting other parts of the application it starts to break down rather significantly)
but as soon as your app becomes less of a cohesive thing and more separated it becomes problematic… especially when it comes to deployments: a push to a repo doesn’t mean “deploy changes to everything” or “build everything” any more
i think the best solution (as with most things) is somewhere in the middle: perhaps several different repos, and a “monorepo” that’s mostly a bunch of subtrees or submodules… you can coordinate changes by committing to the monorepo (and changes are automatically duplicated), or just work on individual parts (tricky with pnpm since the workspace file would be in the monorepo)… but i’ve never really tried this: just had the thought for a while
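to make the pnpm wrinkle concrete: a minimal sketch of the workspace file that would sit at the monorepo root (package globs are hypothetical) — which is exactly why working on an individual subtree on its own gets tricky:

```yaml
# pnpm-workspace.yaml — lives only at the monorepo root, so individual
# subtrees/submodules checked out alone don't know about each other
packages:
  - "apps/*"   # eg frontend, backend — each could be its own subtree
  - "libs/*"   # shared, separately testable libraries
```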
Pup Biru@aussie.zone to
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • What risk might I have accidentally exposed my computer to by viewing a pirated streaming site without AV blocking? English
4 · 2 days ago
the zip file itself might also be generated (you can just tack random garbage into places in the zip format and it’ll be ignored - which is extremely quick to do), in which case the hash would change… the file itself is important in case it’s an exploit in the unzip program itself, but also the contents of the file is important
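that zip trick fits in a few lines — a sketch (filenames and contents here are made up) that stuffs random bytes into the archive comment field, one of the spots unzip tools ignore:

```python
# same payload every time, but padding the archive comment changes the hash
import hashlib
import io
import os
import zipfile

def build_zip(garbage: bytes = b"") -> bytes:
    # build a small zip in memory; the optional garbage goes into the
    # archive comment field, which extraction tools happily skip over
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("payload.txt", "same contents every time")
        zf.comment = garbage
    return buf.getvalue()

clean = build_zip()
padded = build_zip(os.urandom(16))

# both archives are valid and extract to identical contents…
assert zipfile.ZipFile(io.BytesIO(padded)).read("payload.txt") == \
       zipfile.ZipFile(io.BytesIO(clean)).read("payload.txt")

# …but the file-level hashes no longer match, defeating hash lookups
assert hashlib.sha256(clean).digest() != hashlib.sha256(padded).digest()
```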
Pup Biru@aussie.zone to
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • What risk might I have accidentally exposed my computer to by viewing a pirated streaming site without AV blocking? English
7 · 2 days ago
not entirely true. once the file is downloaded, windows does a bunch of “helpful” things with it… these are almost certainly benign (eg rendering thumbnails, getting metadata about certain file types) but almost anything is potentially exploitable (eg an overflow in thumbnail generation code could lead to code execution just from browsing a website and then opening your downloads folder in explorer)
drive-by attacks don’t just affect the browser
with that said, it’d be a huge deal if this was the reality of the situation… it’s highly unlikely, but zero days exist, and the possibility is always real
i say this because this has been exploited in the past with exactly the same scenario: preview generation
Pup Biru@aussie.zone to
Today I Learned@lemmy.world • TIL: Young Men Ages 18–29 are Turning Right-Wing and Women of the Same Age Turning Left-Wing English
143 · 9 days ago
or more likely imo their inability to say what they want and have a compliant tradwife despite their shit behaviour
Pup Biru@aussie.zone to
Political Weirdos @lemmy.world • Please speak slower and louder because Dementia Don doesn't understand English
21 · 9 days ago
and somehow still an improvement
Pup Biru@aussie.zone to
Data is Beautiful@lemmy.world • Passenger deaths per 1 billion passenger miles (2000-2009) English
111 · 9 days ago
hours doesn’t get as close to the metric you actually want, though
the purpose of travel is to get from point a to point b, so you want to measure the likelihood of death when travelling comparable trips
hours doesn’t really work because different modes of transport complete the trip in very different times. distance however is relatively similar
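to see why the choice of denominator matters — a tiny sketch with invented numbers (not real accident statistics), giving two modes the exact same per-trip risk over the same trip:

```python
# illustrative only: invented numbers, not real accident statistics.
# the point: per-mile and per-hour rates rank the SAME trip differently
# once the modes differ in speed.
trip_miles = 500
modes = {
    # mode: (hours to complete the trip, hypothetical deaths per 10^9 trips)
    "car":   (8.0, 40.0),
    "plane": (1.5, 40.0),
}

rates = {}
for mode, (hours, deaths) in modes.items():
    rates[mode] = {
        "per_mile": deaths / trip_miles,  # distance-based rate
        "per_hour": deaths / hours,       # time-based rate
    }

# equal per-trip risk → identical per-mile rates…
assert rates["car"]["per_mile"] == rates["plane"]["per_mile"]
# …but the faster mode looks several times worse per hour,
# even though both trips are exactly as safe
assert rates["plane"]["per_hour"] > rates["car"]["per_hour"]
```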
Pup Biru@aussie.zone to
Fuck AI@lemmy.world • I Set A Trap To Catch Students Cheating With AI. The Result Was Deflating English
2 · 11 days ago
that is an unrealistic solution
now that’s what i call molecular biology!
Pup Biru@aussie.zone to
Technology@lemmy.world • Gmail users warned to opt out of new feature - what we know English
10 · 15 days ago
i closed reader view and scrolled just to see and wow the POPUPS and 50% of the page length being ads
WHAT
who uses the internet like this and finds it acceptable?!
Pup Biru@aussie.zone to
Technology@lemmy.world • Microsoft finally admits almost all major Windows 11 core features are broken English
4 · 16 days ago
buys you a little extra time to move to linux
Pup Biru@aussie.zone to
People Twitter@sh.itjust.works • Are you being ripped off? Good luck! English
2 · 16 days ago
yup they show price per sheet by law
Pup Biru@aussie.zone to
People Twitter@sh.itjust.works • Are you being ripped off? Good luck! English
2 · 17 days ago
ditto! i’d probably do it in my head for a lot of things still because metric is easy, but it saves me so much time and i’m sure i’m an outlier
Pup Biru@aussie.zone to
People Twitter@sh.itjust.works • Are you being ripped off? Good luck! English
321 · 17 days ago
yknow what’s great? unit pricing laws
tldr: in australia businesses must display “unit price” on labels: price per 100g, per 100ml, per sheet, etc for every product so that packages are comparable
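the comparison that unit pricing does for you, as a quick sketch (shelf prices and pack sizes here are made up):

```python
# hypothetical shelf prices: unit pricing makes differently-sized
# packs directly comparable
packs = [
    ("brand A", 4.50, 750),  # (name, price in $, grams)
    ("brand B", 3.20, 500),
]

# normalise every pack to price per 100g
unit_prices = {name: price / grams * 100 for name, price, grams in packs}

for name, per_100g in sorted(unit_prices.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${per_100g:.2f} per 100g")

# the cheaper sticker price isn't automatically the better deal:
# brand A works out to $0.60/100g vs brand B's $0.64/100g
assert unit_prices["brand A"] < unit_prices["brand B"]
```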
clay is just as likely to drive as to catch public transport
most things scale if you throw enough resources at them. we generally say that things don’t scale if the majority case doesn’t scale… it costs far fewer resources to scale with multiple repos than it does to scale a monorepo, thus a monorepo doesn’t scale: i’d argue even the google case proves that… they’ve already sunk so much into dev tooling to make it work… it might be beneficial to the culture (in that they like engineers to work across the entire google codebase), but it’s not a decision made because it scales: scale is an impediment