
  • This really reads to me like the perspective of a business major whose only concept of productivity is about what looks good on paper. He seems to think it’s a desirable goal for EVERY project to be completed with 0 latency. That’s absurd. If every single incoming requirement is a “top priority, this needs to go out as soon as possible” that’s a management failure. They either need to ACTUALLY prioritize requirements properly, or they need to bring in more people.

    For the Chuck and Patty example, he describes Chuck finishing a task and sending it to Patty for review, and Patty not picking it up because she’s “busy.” Busy with what? If this task is the higher priority, why is she not switching to it as soon as it’s ready? Do either Chuck or Patty not know that this task is the current highest priority? Sounds like a management failure. Is there no system in place (whether automatic or not) for notifying people when high-priority tasks are assigned? Also sounds like a management failure. Is Patty just incapable of switching tasks within 30-60 minutes? Then she needs to work on her organizational skills, or management isn’t providing sufficient tooling for multitasking.

    When a top-priority “this needs to go out ASAP” task is in play on my team, I’m either working on it, or I know it’s coming my way soon, and who it’s coming from, because my Project Lead has already coordinated that among all of us. Because that’s her job.

    From the article…

    Project A should take around 2 weeks

    Project B should take around 2 weeks

    That’s 4 weeks to complete them both

    But only if they’re done in sequence!

    If you try to do them at the same time, with the same team, don’t be surprised if it ends up taking 6 weeks!

    Nonsense. If these are both top priorities, the team has proper leadership, and the 2-week estimates are actually accurate, then 4 weeks is entirely achievable. If these are not top priorities, and the team has other work as well, then yeah, no shit it might be 6 weeks. You can’t just ignore the 2 weeks from Project C if it’s prioritized similarly to A and B. If A and B NEED to go out in 4 weeks, then prioritize them higher, and coordinate your team to make that happen.


  • A quality apology consists of 3 things:

    • An explanation of what you did that was wrong, and why it was wrong
    • An explanation of what you’re going to try to change about yourself, to avoid the same mistake
    • An expression of remorse, i.e. the word “sorry” or “apologize”.

    Your proposed apology has all those elements, so you’re already ahead of most folks. But there are a few suggestions for improvement in this thread that I think are also good.

    “if you felt so, I apologize”: I don’t read this as you apologizing for how the other person feels, since you clarified that earlier. But I think it’s fair that others might read it that way, so you’re better off eliminating the ambiguity. What you’re apologizing for is what you did: acting without considering that others might (validly) consider it inappropriate.

    “I’ll try to control myself around you”: similar deal; it should be clear that this is about you, not them. And when it comes to swearing in a workplace, it’s pretty darn common to consider it inappropriate and unprofessional, no matter who you’re around. Maybe part of your apology needs to focus on how the behavior is unprofessional, and that you simply needed help recognizing that, as you’re (possibly?) new to the professional working world.


  • As I understand it (and assuming you know what asymmetric keys are)…

    It’s about using public/private key pairs and swapping them in wherever you would use a password. Except passwords are things users can actually remember in their head, and are short enough to be typed into a UI. Asymmetric keys are neither of these things, so trying to actually implement passkeys means solving this newly created problem of “how the hell do users manage them,” and the tech world seems to be collectively failing to realize that the benefit isn’t worth the cost. That last bit is subjective opinion, of course, but I’ve yet to see any end-users actually be enthusiastic about passkeys.
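
    To make that concrete, here’s a minimal sketch of the swap in C#, using .NET’s built-in ECDsa (assuming .NET 6+). The names and flow are my own simplification, not the actual WebAuthn/passkey protocol: the site stores only your public key, and login is just signing a random challenge.

    using System;
    using System.Security.Cryptography;

    public static class PasskeySketch
    {
        public static void Main()
        {
            // The user's key pair; the private key never leaves their device.
            using var userKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);

            // At registration, the site stores only the public key...
            byte[] storedPublicKey = userKey.ExportSubjectPublicKeyInfo();

            // ...at login, the site sends a random challenge...
            byte[] challenge = RandomNumberGenerator.GetBytes(32);

            // ...the user's device signs it with the private key...
            byte[] signature = userKey.SignData(challenge, HashAlgorithmName.SHA256);

            // ...and the site verifies the signature against the stored public key.
            using var verifier = ECDsa.Create();
            verifier.ImportSubjectPublicKeyInfo(storedPublicKey, out _);
            bool ok = verifier.VerifyData(challenge, signature, HashAlgorithmName.SHA256);

            Console.WriteLine(ok); // True; no shared secret ever crossed the wire
        }
    }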

    If that’s still flying over your head, there’s a direct real-world analogue that you’re probably already familiar with, but I haven’t seen mentioned yet: chip-enabled credit cards. Chip cards still use symmetric cryptography, instead of asymmetric, but the “proper” implementation of passkeys, in my mind, would be basically chip cards: the card keeps your public/private key pair on it, with embedded circuitry that lets it do encryption with the private key, without ever having to expose it.

    Of course, the problem would be the same as the one that quite nearly killed chip cards in the US: everyone who wants to support or use passkeys would then need a passkey reader to plug into when logging in somewhere. We could probably make a lot of headway on this by just using USB, but that would make passkey cards more complicated, more expensive, and more prone to being damaged over time. Plus, that doesn’t really help people wanting to login to shit with their phones.


  • Automated certificate lifecycle management is going to be the norm for businesses moving forward.

    This seems counter to the goal of “improving internet security”. Automation is a double-edged sword: convenient, sure, but also an attack vector, and one where malicious activity is less likely to be noticed, because actual people aren’t involved in the process anymore.

    We’ve got ample evidence of this kind of thing with passwords: increasing complexity requirements and shortening password lifetimes improves security only up to a point. Push it too far, and it actually ends up DECREASING security, because it encourages bad practices that get around the increased burden, like passwords on sticky notes, or “Password1” becoming “Password2”.


  • It’s the capability of a program to “reflect” upon itself, i.e. to inspect and understand its own code.

    As an example, in C# you can write a class…

    public class MyClass
    {
        public void MyMethod()
        {
            ...
        }
    }
    

    …and you can create an instance of it, and use it, like this…

    var myClass = new MyClass();
    myClass.MyMethod();
    

    Simple enough, nothing we haven’t all seen before.

    But you can do the same thing with reflection, as such…

    // Find the type by name, at runtime, in the currently-executing assembly.
    var type = System.Reflection.Assembly.GetExecutingAssembly()
        .GetType("MyClass");

    // Find and invoke the parameterless constructor to create an instance.
    var constructor = type.GetConstructor(Array.Empty<Type>());

    var instance = constructor.Invoke(Array.Empty<object>());

    // Find the method by name, bind it to the instance as a delegate, and call it.
    // (The variable can't be named "delegate"; that's a reserved word in C#.)
    var method = type.GetMethod("MyMethod");

    var methodDelegate = method.CreateDelegate(typeof(Action), instance);

    methodDelegate.DynamicInvoke(Array.Empty<object>());
    

    Obnoxious and verbose, and it tosses basically all type safety out the window, but it does enable some pretty interesting things, like self-discovery and dynamic loading of plugins (sketched below), or self-configuration of apps. It’s also often useful when messing with generics. I could dig up some practical use-cases, if you’re curious.
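
    For instance, here’s a rough sketch of that plugin-discovery pattern. To be clear, IPlugin, HelloPlugin, and PluginHost are hypothetical names of my own, not anything from the framework: the host scans the assembly for types implementing an interface and instantiates them, without ever referencing the concrete types directly.

    using System;
    using System.Linq;
    using System.Reflection;

    // Hypothetical plugin contract; a real app would define its own.
    public interface IPlugin
    {
        void Run();
    }

    // A "plugin" the host never names directly in code.
    public class HelloPlugin : IPlugin
    {
        public void Run() => Console.WriteLine("Hello from a plugin!");
    }

    public static class PluginHost
    {
        public static void Main()
        {
            // Discover every concrete IPlugin implementation in this assembly
            // at runtime, then instantiate and run each one.
            var plugins = Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                    && !t.IsInterface && !t.IsAbstract)
                .Select(t => (IPlugin)Activator.CreateInstance(t)!);

            foreach (var plugin in plugins)
                plugin.Run();
        }
    }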


  • I think the big reasons for most people boil down to one or both of two things:

    A) People having zero trust in Google, i.e. they don’t believe that paying for the service will exempt them from being exploited, so what’s the point?

    B) YouTube’s treatment of its content creators, who are what people actually come to YouTube for. Advertisers and copyright holders (and copyright trolls) get first-class treatment, while the majority of content creators get little to no support for anything.