Do I read this right? That comic is from 2011?
yes
That’s normal evolution; go ask a biologist about junk DNA.
The idea of junk DNA is based on the fact that it doesn’t code for any proteins, but many other functions have since been found for non-coding DNA (small nuclear RNA, microRNA, small interfering RNA, etc.). Some parts of the genome are not transcribed into anything but still serve a functional purpose, such as in telomere caps and in genome folding. And there are large stretches with no known purpose; they might be remnants of working genes, and they might play a role in evolution (note the “might”). One research project (ENCODE) found that around 80% of the human genome is transcribed, but the counterargument is that DNA being transcribed doesn’t necessarily mean it has a function. The concept of junk DNA hasn’t disappeared, but it isn’t necessarily true either.
I am just a student, so take my info with a pinch of salt.
For real, I thought he was going there. Like: the AI keeps trying to fight off the coders messing with its perfect code, so it keeps generating junk code to protect the actual code.
I have a similar theory about Star Trek. In one scene there was a blurry picture, and to sharpen it Riker said: “Computer, implement recursive algorithm.” That is equivalent to “Computer, do something.” So now my theory is that there is an intelligent ship with a genius AI that carries around humans who have regressed to toddler intelligence because the AI does everything for them.
The ship is basically human daycare with lots of blinking buttons and moving pictures to keep the humans occupied while the ship does the actual (and probably boring) science.
Starfleet Academy is basically teaching them technobabble and looking great in a uniform while the AIs do the real work.
The Culture novels by Iain M. Banks basically have this premise. There are superintelligent AIs called Minds that are pretty much gods who run everything, and their civilisation (the Culture) is a utopia for anyone who lives in it. Minds control the ships, which sometimes have crews, but the crews are described as “somewhere between passengers, pets and parasites” in terms of how useful they actually are lol
Ok, I doubt anyone is going to be willing to have this discussion, but here I am. My assessment is as follows: to be of value, “AI” doesn’t need to be perfect, it just needs to be better than the average programmer. If it can produce the same quality of code twice as fast, or code that’s twice as good in the same amount of time, that’s enough. If I want it to code me a video game, I would personally judge it by how well it does against what I would expect from human programmers. Currently there is no comparison; I’m no coding expert, but even I find myself correcting AI on even the simplest of code. That’s only temporary, though. Ten years ago this tech didn’t even exist; ten years from now (assuming it doesn’t crash our economy in more ways than one) I would imagine the software will at least be comparable to an entry-level programmer.
I guess what I’m getting at is that people rail against AI for faults that a human would make worse. Take self-driving cars: having seen human drivers, I definitely want that tech to work out. Obviously it’s ideal for it to be perfect and coordinate with other smart cars to reduce traffic loads and improve safety for everyone, but as long as it’s safer than a human driver, I would prefer it. As long as it codes better than your average overworked, underpaid programmer, it becomes a useful tool.
That being said, I do see tons of legitimate reasons to dislike AI, especially in its current form. A lot (I’d say most) of those issues don’t actually lie with AI at all, or even with LLMs. Most of the complaints I’ve heard about AI development are actually thinly veiled complaints about capitalism, which is objectively failing even without AI. The rest are mostly complaints about the current state of the tech, which I find less valid; it’s like complaining that your original iPod didn’t have LiDAR built in like iPhones do now. Setting aside the capitalism issues (how this tech will be used, how it’s currently being funded, its environmental impacts, and the fact that this level of research spending is unsustainable and will collapse the economy), give the tech time and it will mature. That almost feels like sarcasm given those very real issues, but again, those are all capitalism issues. If we were serious about saving our planet, a guardian AI that automatically drone-strikes sources of intense pollution would go a long way. If you’re worried about robots takin’ yer jerbs, try not being capitalism-pilled and realise that humans got by for eons without jobs or class structures. Post-scarcity is almost mandatory under proper AI, and capitalism exists to ensure that post-scarcity can’t happen.
Ok, I doubt anyone is going to be willing to have this discussion, but here I am.
You’re right; here I was, enjoying a silly comic (written in 2011) about computers turning programming into human daycare, and someone had to turn it into yet another excuse to start talking about AI, as if we didn’t have enough of that!