• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • To me 16 is long haha.

    I usually end up running with 16 characters since a lot of services reject anything longer than 20, and as a programmer I just like it when things are a power of two. Back in the Dark Times of remembering passwords my longest was 13 characters, so when I started using a password manager, setting them that long felt wild to me.

    I do have my bank accounts under a 64-character password purely because monkey brain like seeing big security rating in KeePass. Entropy go brrrrrrrrrrrr


  • I’ve used cloud-based password manager services for work and “self host” my personal stuff. I barely consider it self hosting since I use KeePass, and on every machine it’s configured to keep a local cached copy of the database but to pull primarily from the database file on my in-home NAS.

    Two issues I’ve had:

    Logging into an account on a device that’s not on my home network is brutal. I often resort to simply viewing the needed password and painstakingly typing it in (and I run with loooooong passwords)

    If I add or change a password on a desktop and don’t sync my phone before I leave, I get locked out of accounts. In two years rocking this setup it’s happened three times: twice I just said meh, I don’t really need to do this right now, and the third time I went through account recovery and set a new password from my phone.

    Minor complaint:

    Sometimes Keepass2Android gets stuck trying to open the remote database and I have to let it sit and time out (5 minutes!!!), which gets really annoying but happens very infrequently, which is why I say it’s just a minor complaint.

    All in all, I find the inconvenience of the personal setup so low that, to me, even a $10 annual subscription is not worth it.
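    Conceptually, the “local cache vs. NAS copy” logic looks something like the sketch below. This is a minimal POSIX C illustration, not what KeePass actually does internally, and both paths are hypothetical; KeePass’s own sync settings handle this for real.

```c
/* Sketch: prefer the NAS copy of the .kdbx when it's reachable and newer,
 * otherwise fall back to the local cache. Paths are hypothetical. */
#include <stdio.h>
#include <sys/stat.h>

#define NAS_DB   "/mnt/nas/passwords.kdbx"        /* hypothetical NAS mount  */
#define LOCAL_DB "/home/me/.cache/passwords.kdbx" /* hypothetical local copy */

static int copy_file(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb");
    if (!in) return -1;
    FILE *out = fopen(dst, "wb");
    if (!out) { fclose(in); return -1; }
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

int main(void) {
    struct stat nas, local;
    int have_nas   = stat(NAS_DB, &nas) == 0;     /* NAS reachable?     */
    int have_local = stat(LOCAL_DB, &local) == 0; /* cache present?     */

    if (have_nas && (!have_local || nas.st_mtime > local.st_mtime)) {
        puts("NAS copy is newer; refreshing local cache");
        return copy_file(NAS_DB, LOCAL_DB);
    }
    if (have_local) {
        puts("Off network; using local cached copy");
        return 0;
    }
    fputs("No database reachable\n", stderr);
    return 1;
}
```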



  • For graphics, the problem to be solved is that the compiled N64 code expects that if it puts value X at memory address Y, a particular pixel gets drawn in a particular way.

    Emulators solve this by having a virtual CPU execute the game code (kinda difficult), then having emulator code read the virtual memory space the game code is interacting with (easy), interpret those values (stupid crazy hard), and replicate the graphical effects using custom code and modern graphics APIs (kinda difficult).

    This program decompiles the N64 code (easy), searches for known function calls that interact with the N64 GPU (easy), swaps them with known valid modern graphics API calls (easy), then compiles for the local machine (easy). Knowing what function signatures to look for and what to replace them with is, in the general case, basically downright impossible, but because a lot of N64 games shared common code, if you go through the laborious process for one game, you get a bunch of others for free or for way less effort.
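    To make the swap concrete, here’s a toy sketch in C. Every name in it is invented for illustration; real recompilation projects match actual libultra/microcode entry points rather than this stand-in.

```c
/* Toy sketch of the recompilation swap. All names are invented. */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a display-list command the decompiler recovers. */
typedef struct { uint32_t w0, w1; } Gfx;

/* Replacement shim: same signature the matcher looked for, but the body
 * now calls a modern renderer instead of poking N64 memory. */
static void gSPDisplayList_shim(const Gfx *dl) {
    /* hypothetical modern graphics backend call */
    printf("submit display list %p to modern graphics API\n", (const void *)dl);
}

/* Recompiled game function: identical control flow to the original,
 * with the matched GPU call swapped for the shim. */
static void draw_scene(const Gfx *dl) {
    gSPDisplayList_shim(dl); /* was: a write into the RCP's command space */
}

int main(void) {
    Gfx dl = { 0xDE000000u, 0x00000000u }; /* made-up command word */
    draw_scene(&dl);
    return 0;
}
```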

    As one of my favorite engineering phrases goes: the devil is in the details



  • Ada

    It has a lot of really nice features for creating data types, and amazing static analysis at compile time.

    But all the tooling around it is absolute crap, which makes using the language unbearable and truly awful. If it had better tooling, I could see it having taken a decent chunk of development away from C and C++.


  • As someone who is in the aerospace industry and has dealt with safety-critical code under NASA oversight, it’s a little disingenuous to pin NASA’s coding standards entirely on attempting to make things memory safe. That’s part of it, yeah, but it’s a very small part. There are a ton of other things NASA is trying to protect against.

    Plus, Rust doesn’t solve the underlying problem NASA is addressing by banning the C++ standard library. Part of it is DO-178 compliance (or lack thereof); the other part is that dynamic memory has the potential to cause all sorts of problems on resource-constrained embedded systems. Statically analyzing dynamic memory usage is virtually impossible, and testing for it gets cost-prohibitive real quick, so it’s just easier to blanket-ban the STL.

    Also, writing memory-safe code honestly isn’t that hard. It just requires a different approach to problem solving that, like any other design pattern, becomes easy once you learn it and get used to it.
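    As a rough illustration of that different approach (sizes and names here are made up), embedded C tends toward static allocation instead of malloc/free, so the worst-case memory footprint is knowable from the source alone:

```c
/* Sketch of the static-allocation style safety-critical C leans on:
 * every buffer is sized at compile time, so worst-case memory use can
 * be established by static analysis alone. Numbers are made up. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_TRACKED_OBJECTS 32 /* worst case decided at design time */

typedef struct {
    uint32_t id;
    float    position[3];
} TrackedObject;

/* Fixed pool instead of malloc: no fragmentation, and the out-of-memory
 * path is an explicit, testable branch rather than a runtime surprise. */
static TrackedObject pool[MAX_TRACKED_OBJECTS];
static size_t pool_used = 0;

static TrackedObject *alloc_object(void) {
    if (pool_used >= MAX_TRACKED_OBJECTS)
        return NULL; /* exhaustion handled explicitly by the caller */
    return &pool[pool_used++];
}

int main(void) {
    TrackedObject *obj = alloc_object();
    if (obj) {
        obj->id = 1;
        printf("allocated object %u from static pool\n", obj->id);
    }
    return 0;
}
```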



  • So many people forget that while they understand how to use a Linux terminal and how Linux works at a high level, not everyone does. Plus, learning all of that takes time, effort, and tenacity, which not everyone is willing to put in. Linus’s whole conclusion was that as long as that learning curve exists and it remains that easy to shoot yourself in the foot, the Linux desktop just isn’t viable for a lot of people.

    But Linus has had a lot of public fuck-ups, therefore everything he says must be inherently wrong.


  • I think part of the “what do I do with this” factor for the iPad was that Apple (and other companies, still to this day) were so hell-bent on making everything smaller and more compact that releasing a larger product was marketing whiplash. Not to mention that smartphones were being pitched as the “do everything device”, so why would you need anything else?

    Once you get past that marketing sugarcoating, it becomes pretty obvious what you’d use an iPad for: internet and media consumption at a larger scale than your phone, easier on your eyes, while retaining at least some of the lightweight, smaller form factor that separates it from a regular laptop. Sure, you didn’t have the stick-it-in-your-pocket advantage of a phone or the full keyboard and computational power of a laptop, but there was this in-between where, for a modest fee, you could have the conveniences if you could live with (or ignore) the sacrifices.


  • I don’t think the MacBook Air’s launch is a good comparison.

    Sure, there was an early-adopter tax on one of the first “thin and light” laptops, but people already knew what you could use a MacBook for; there was already a large value proposition in having a MacBook, and the extra cost was entirely about being more portable than its full-size counterparts. Everything you could do on a Mac, just way easier to take on the go.

    I’ve read a few reviews, watched MKBHD’s initial review, and outside of a few demo apps they point to the Vision Pro having no real point to it. If that’s true, then it falls in line with existing VR headsets that are a fraction of its cost, and in a niche market, being three times the cost of your competitors is not a good position to be in.



  • The issue with an ongoing service is that the longer it’s in use, the more it costs Kia over time. The larger the time boxes Kia sells, the bigger the number gets and the more customers you’re going to scare off.

    Using Kia’s online build-and-price tool, it looks like the most expensive Telluride you can get right now is $60k MSRP, the cheapest $30k.

    Let’s assume Kia estimates the average lifetime of a Telluride at 20 years, so they create an option to purchase this service one time for the “lifetime” of the vehicle. Taking the pricing Kia has listed in good faith, using that $150 annual package, and assuming the price goes up 10% every year (what Netflix, YouTube, etc. have been doing), across those twenty years you’re looking at roughly an $8.5k option. At the top trim that’s still 14% extra, which will make some buyers hesitant; at the base model it’s 28% more expensive.
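    (For anyone checking that figure, it’s just a geometric series over the 20 years, using the 10% growth assumption from above:)

$$\sum_{k=0}^{19} 150 \cdot 1.1^{k} \;=\; 150 \cdot \frac{1.1^{20} - 1}{0.1} \;\approx\; \$8{,}600$$

    and $8,600 / $60,000 ≈ 14% of the top trim, $8,600 / $30,000 ≈ 28% of the base model.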

    Enough buyers will scoff at that, so Kia can either ditch the idea entirely, eating the initial development cost and never making it back, or find some way to repackage that cost into something buyers are willing to deal with.

    To me the bigger issue is the cost of the service versus what you’re getting. Server time + dev team + mobile data link can’t be costing Kia more than a few million annually (mid-to-upper hundreds of thousands is more likely), so they must not be expecting that many people to actually pay for any of this.


  • In pure C, things are a bit different from what you describe.

    Declaration has (annoyingly) multiple meanings depending on context. The most basic one: when you create an instance of a variable, you’re telling the compiler that you want a variable with symbol name X, data type Y, and qualifiers A, B, and C. During compilation the compiler reads that and starts reserving memory for the linker to assign later. These statements are always of the form “qualifiers data_type symbol;”

    Function declaration is a bit different. Here you’re telling the compiler “hey, you’re going to see this function show up later; here are the types for the arguments and the return value. I pinky-swear promise you’ll get a definition somewhere else.” You can compile without the definition, but the linker will get real unhappy if the definition isn’t there when it runs. Here you’re looking at a statement of the form “qualifiers return_data_type symbol(arg_1_data_type arg_1_symbol, …);” Technically in function declarations you don’t need argument symbols, just the types, but it’s better to include them for readability.

    Structs are different still. Here you’re telling the compiler that a struct definition will show up somewhere else in the same translation unit, so the type name can appear before its definition. When the compiler sees that type in a variable declaration it can’t reserve space right away, and it has to have the struct definition before compilation ends. Syntax-wise this is pretty straightforward: “struct struct_name;” (Typedefs throw a syntax wrench into this that I won’t get into; they’re functionally the same though.)

    One more thing you can do with variables at declaration is to “extern” them. This is more similar to function declaration: you’re telling the compiler “hey, you’re gonna see this symbol pop up; here’s how you use it, but it actually lives somewhere else, k thx bye.” I personally don’t like calling this declaration since it behaves differently than a normal declaration. The syntax is the same as a normal variable declaration with “extern” tossed in front of the qualifiers.

    Definitions come in two kinds: function definitions contain the actual code that gets translated into instructions, while enum, struct, and typedef definitions describe memory requirements for when they get used.

    Structs and enums have syntax like “struct struct_name {blah, blah, blah};”, typedefs are just “typedef old_type new_name;” (existing type first, new name second), and function definitions look like “qualifiers return_data_type symbol(arg_1_data_type arg_1_symbol, …) {blah; blah; blah;}” (note that function definitions don’t need a ; at the end, and here you do need argument symbols).

    Lastly, when you create a variable instance and say in one statement that you want that symbol to have value X, by the standard that’s initialization. So “int foo = 5;” is declaration and initialization. Structs and arrays have special initialization syntax, “struct foo bar = {5, 6, 7};”, where the values in the list get applied in order of the elements in the struct definition. You can also use named (designated) initialization for structs, which looks like “struct foo bar = {.element_one = 5, .e_two = 6, .e_three = 7};” This style is only available at initialization; you cannot use it for any other assignment. In other words, you can’t change elements in bulk, you have to do it one at a time.

    C lets you get real wild and combine struct definition, instance declaration, and initialization all into one! Though if I were your code reviewer I’d reject that for readability.
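    For the record, here’s everything above condensed into one compilable snippet:

```c
#include <stdio.h>

struct point;              /* struct declaration: definition comes later    */
extern int shared_counter; /* extern: lives in some other translation unit  */

int add(int a, int b);     /* function declaration: pinky-swear promise     */

struct point { int x; int y; };   /* struct definition                      */
typedef struct point point_t;     /* typedef: old type first, then new name */

static unsigned int call_count;   /* variable declaration                   */

int add(int a, int b) {           /* function definition                    */
    call_count++;
    return a + b;
}

int main(void) {
    int foo = 5;                      /* declaration + initialization       */
    struct point bar = { 5, 6 };      /* positional initialization          */
    point_t baz = { .x = 7, .y = 8 }; /* named (designated) initialization  */
    printf("%d %d %d %d\n", foo, add(bar.x, bar.y), baz.x, baz.y);
    return 0;
}
```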

    </wall-o-text>



  • Even in P2P you still need someone to tell you what other IP addresses are in the group you’re trying to join, and you have to know the IP address of that someone. You’re not going to scan the entire Internet to figure out who else is attempting to play the exact same game as you; that would take literal days every time (and that’s assuming you rule out anyone on IPv6; include them and it suddenly becomes millions of years).

    Even in P2P you will need to hit a commonly known and trusted resource that tells you which other IP addresses to go talk to.
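    (Rough numbers behind that claim, assuming a made-up home-connection probe rate of 10,000 addresses per second:)

$$\frac{2^{32}}{10^{4}\,\text{probes/s}} \approx 4.3\times10^{5}\,\text{s} \approx 5\ \text{days}, \qquad \frac{2^{128}}{10^{4}\,\text{probes/s}} \approx 3.4\times10^{34}\,\text{s} \approx 10^{27}\ \text{years}$$

    If anything, “millions of years” undersells the IPv6 case.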