• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: September 2nd, 2023


  • calcopiritus@lemmy.world to Microblog Memes@lemmy.world · Misplaced Priorities
    2 days ago

    Well, doing anything costs energy.

    If you only display hours and minutes, you only need to redraw the clock every minute. If you display the seconds, you have to redraw every second.

    And redrawing is not just changing some pixels. The OS scheduler has to wake up the clock process and put another one to sleep, and all of that costs power.

    Yes, the clock in the taskbar probably uses less power in an hour than watching a YouTube video will consume in a second.

    But it is still a 60x increase in power usage.
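    The arithmetic behind that 60x figure is easy to check. A minimal sketch (the function name is mine, purely illustrative; nothing here is from Windows itself):

    ```python
    # Timer wakeups needed to keep a taskbar clock current, per hour.
    # Each redraw implies at least one wakeup of the clock process.

    def wakeups_per_hour(redraw_interval_seconds: int) -> int:
        """Number of timer wakeups in one hour at a fixed redraw interval."""
        return 3600 // redraw_interval_seconds

    minutes_only = wakeups_per_hour(60)  # redraw once per minute
    with_seconds = wakeups_per_hour(1)   # redraw once per second

    print(minutes_only, with_seconds, with_seconds // minutes_only)
    # → 60 3600 60: sixty times as many wakeups per hour
    ```

    Whether that translates into a measurable battery difference depends on everything else the machine is doing, but the wakeup count itself really is 60x.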

    This warning is actually a good thing. It means whoever implemented the seconds in the taskbar clock has the right mindset for developing an operating system. You don’t use a JavaScript framework to build the start menu because “all the juniors come from JavaScript boot camps, so they’re cheap to hire”; you want someone who knows that the OS’s job is to provide a strong base layer that uses few resources, so that great things can be built on top of it.

    Of all the things you could complain about in Windows, you chose to complain about one of the actually good ones, when you could instead go after the “30% vibe coded” codebase with a JavaScript UI that can’t even implement a working “power off” button.


  • Well, according to what I read on the internet, everything works out of the box on Linux and this year is the year of the Linux desktop. But in my personal experience, and that of most people I know IRL, there is always someone who spent last weekend fixing some weird thing on their Linux system that used to work correctly when they used Windows.






  • I know this thread is old. But I disagree with you.

    I agree that depending on how you use a debugger, some race conditions might not happen.

    However, I don’t agree that debuggers are useless for fixing race conditions.

    I have a great example that happened to me to prove my point:

    As I was using a debugger to fix a normal bug, another quite strange, unknown bug appeared. That other bug was indeed a race condition; I just had never encountered it before.

    The issue was basically:

    1. A request to initiate a session arrives
    2. That request takes so long that the endpoint decides to shut down the session
    3. A request to end the session arrives

    And so handling the session start and the session end at the same time resulted in a bug. It was more complicated than this (we do use mutexes), but it was along those lines.

    We develop in lab-like conditions with fast networking and fast computers, so this issue practically never happens on its own. But thanks to the breakpoint I had put in the session-initiation function, I was able to observe it. In a real-world scenario it is something that may well happen.

    Not only that: I could reproduce the “incredibly rare” race condition 100% of the time. I just needed to place a breakpoint in the right spot and wait for some amount of time.

    Could this be done without a debugger? Most of the time, yes: just put a sleep call in there. Would I have found this issue without a debugger? Not at all.
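    The overlap described above can be simulated without any real network. A hedged Python sketch, where a sleep stands in for the breakpoint (all names are illustrative, not the actual codebase):

    ```python
    import threading
    import time

    # Illustrative sketch of the race: a slow session-start handler
    # overlaps with the endpoint's timeout-driven session teardown.

    class SessionManager:
        def __init__(self):
            self.lock = threading.Lock()
            self.state = "new"
            self.log = []

        def start_session(self, delay: float):
            time.sleep(delay)          # stands in for the breakpoint / slow network
            with self.lock:
                self.log.append("start")
                self.state = "active"  # runs AFTER the teardown: the bug

        def end_session(self):
            with self.lock:
                self.log.append("end")
                self.state = "closed"

    mgr = SessionManager()
    t = threading.Thread(target=mgr.start_session, args=(0.2,))
    t.start()
    mgr.end_session()  # the endpoint gives up and ends the session first
    t.join()

    print(mgr.log, mgr.state)
    # → ['end', 'start'] active — a closed session gets revived
    ```

    Note that the mutex is taken in both handlers, yet the bug survives: the lock serialises the two updates but does nothing about their order, which is exactly the “we do use mutexes” caveat above.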

    An even better example:

    Deadlocks.

    How do you fix a deadlock? You run the program under a debugger and make the deadlock happen. You then look at which threads are waiting at a lock call and there’s your answer. It’s as simple as that.

    How do you print-debug a deadlock? Put a log line before and after every lock call in the program and look for unpaired logs? Sounds like a terrible experience. Some programs have thousands of lock calls, and some execute them tens of times per second. Additionally, the time needed to print those logs changes the behaviour of the program itself and may make the deadlock harder to reproduce.
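    A classic lock-ordering deadlock is easy to stage. A minimal sketch (lock names are mine; the timed acquire plays the role of the debugger, letting us observe both threads parked on their second lock instead of hanging forever):

    ```python
    import threading

    # Two threads take the same pair of locks in opposite order.
    # A barrier forces both to hold their first lock before trying
    # the second, which guarantees the deadlock.

    lock_a, lock_b = threading.Lock(), threading.Lock()
    barrier = threading.Barrier(2)
    stuck = []

    def worker(first, second, name):
        with first:
            barrier.wait()                  # both threads now hold one lock
            if not second.acquire(timeout=0.5):
                stuck.append(name)          # would block forever without timeout
            else:
                second.release()

    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
    t1.start(); t2.start(); t1.join(); t2.join()

    print(sorted(stuck))
    # → ['t1', 't2'] — both threads stuck on their second acquire
    ```

    Under a debugger, pausing the hung process and listing the threads shows each one waiting inside an acquire call, and the opposite lock ordering is immediately visible in the two stack traces.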