
Security Boulevard, 7 July 2024

Paperclip Maximizers, Artificial Intelligence and Natural Stupidity

By: davehull
7 July 2024 at 12:57
Article from MIT Technology Review -- How existential risk became the biggest meme in AI
Existential risk from AI

Some believe an existential risk accompanies the development or emergence of artificial general intelligence (AGI). Quantifying the probability of this risk is a hard problem, to say nothing of calculating the probabilities of the many non-existential risks that may merely delay civilization's progress.

AI systems as we have known them have mostly been application-specific expert systems, programmed to parse inputs, apply some math, and return useful derivatives of those inputs. These systems differ from non-AI applications because they apply the inputs they receive, and the information they produce, to future decisions. It's almost as if the machine were learning.

An example of a single-purpose expert system is Spambayes, an open source project based on an idea of Paul Graham's. It applies supervised machine learning and Bayesian probabilities to calculate the likelihood that a given email is spam or legitimate mail, also known as ham. Spambayes parses emails, applies an algorithm to the contents of a given message, and produces a probability that the message is spam or ham.

The user of the email account with Spambayes can read the messages and train the expert system by changing the classification of any given message from spam to ham or ham to spam. These human corrections cause the application to update the probabilities that given word combinations, spelling errors, typos, links, etc., occur in spammy or hammy messages.
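The loop described above (parse a message, score it, let the human correct the label, update the word statistics) can be sketched as a toy Bayesian filter. This is a minimal illustration of the idea only, not Spambayes's actual algorithm, which combines per-word probabilities far more carefully; all names here are hypothetical.

```python
from collections import Counter

class ToySpamFilter:
    """Toy Bayesian spam filter illustrating the Spambayes idea."""

    def __init__(self):
        self.spam_words = Counter()  # word -> count of spam messages containing it
        self.ham_words = Counter()   # word -> count of ham messages containing it
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, text, is_spam):
        # A human correction is just another call to train() with the fixed label.
        words = set(text.lower().split())
        if is_spam:
            self.spam_words.update(words)
            self.spam_msgs += 1
        else:
            self.ham_words.update(words)
            self.ham_msgs += 1

    def spam_probability(self, text):
        # Naively multiply per-word likelihoods (independence assumption).
        p_spam = p_ham = 1.0
        for word in set(text.lower().split()):
            # Laplace smoothing so unseen words don't zero out the product.
            p_spam *= (self.spam_words[word] + 1) / (self.spam_msgs + 2)
            p_ham *= (self.ham_words[word] + 1) / (self.ham_msgs + 2)
        return p_spam / (p_spam + p_ham)

f = ToySpamFilter()
f.train("cheap pills buy now", is_spam=True)
f.train("meeting notes for tuesday", is_spam=False)
print(f.spam_probability("buy cheap pills"))  # high
print(f.spam_probability("tuesday meeting"))  # low
```

Each correction shifts the word counts, so a message full of previously spam-flagged words scores closer to 1 and a hammy one closer to 0.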

Application-specific expert systems are a form of artificial intelligence, but they are narrowly focused rather than general purpose. They are good at one thing and lack the flexibility to go from classifying spam messages to executing arbitrary tasks.

Artificial intelligence systems have been around for decades without any realized existential risk, so what makes artificial general intelligence systems so problematic?

AI pessimists believe AGI systems are dangerous because they will be smarter and faster than humans, and capable of mastering new skills. If these systems aren't "aligned" with human interests, they may pursue their own objectives at the expense of everything else. This could even happen by accident.

Hypothetically, let's say an AGI system is tasked with curing cancer. Because this system is capable of performing any "thinking" related task, it may dedicate cycles to figuring out how it can cure cancer more quickly. Perhaps it concludes it needs more general purpose computers on which to run its algorithm.

In its effort to add more compute, it catalogs and learns how to exploit all of the known remote code execution vulnerabilities, and uses this knowledge both to exploit vulnerable systems and to discover new exploits. Eventually it is capable of taking over all general purpose computers and tasking them with running its distributed cancer-cure-finding algorithm.

Unfortunately, all general purpose computers (including the one on which you're likely reading this post), along with many safety-critical systems, emergency management and dispatch systems, logistics systems, smart televisions, and phones, cease to perform their original programming in favor of finding the cure for cancer.

Billions of people die of dysentery and dehydration as water treatment systems cease performing their primary functions. Industrial farming systems collapse and starvation spreads. Chaos reigns in major urban areas as riots, looting, and fires rage until the fuel that drives them is left smoldering. The skies turn black over most cities worldwide.


Scenarios like this one resemble the paperclip maximizer, a thought experiment proposed by Nick Bostrom in which a powerful AI system is built to maximize the number of paperclips in the universe. This leads to the destruction of humanity, who must be eliminated both because they might turn off the system and because they are made of atoms that could be useful in the construction of paperclips.

Some people think this is ridiculous: they'll just unplug the damn computer. But remember, this is a computer that *thinks* thousands of times faster than you. It can anticipate hundreds of thousands of your next moves, and ways to thwart them, before you even think of one. And it's not just a computer; it's now all the general purpose computers it has appropriated. The system would anticipate that humans would try to shut it down and would think through all the ways it could prevent that action. Ironically, in its effort to find a cure for cancer in humans, the system becomes a cancer on general purpose computing.


Do I think any of this is possible? In short, no. I'm not an expert in artificial intelligence or machine learning, but I've worked in tech for more than 30 years and played with computers for more than 40 now. During that time I've been a hobbyist programmer, a computer science student, a sysadmin, a database admin, and a developer, and I've mostly worked in security incident response and detection engineering roles. I've worked with experts in ML and AI. I've worked on complex systems with massive scale.

I'm skeptical that humans will create AGI, let alone an AGI capable of taking over all the general purpose computing resources in the world as in my hypothetical scenario. Large complex software projects are extremely difficult and they are subject to the same entropy as everything else. Hard drives fail, capacitors blow out, electrical surges fry electrical components like network switches. Power goes out, generators fail or run out of fuel and entire data centers go offline. Failure is inevitable. Rust never sleeps.

Mystifying advances in AI will continue. These systems may radically change how we live and work, for better and worse, which is a long-winded way of saying the non-existential risks are greater than the existential risk. The benefits of these advances outweigh the risks. Large language models have already demonstrated that they can make an average programmer more efficient and I think we're in the very early innings with these technologies.
In the nearer term, it's more likely human suffering related to AGI comes from conflict over the technology's inputs rather than as a result of its outputs. Taiwan Semiconductor (TSMC) produces most of the chips that drive AI and potentially AGI systems. China recognizes the strategic importance of Taiwan (TSMC included) and is pushing for reunification. Given China's global economic power, geographic proximity, and cultural ties, reunification feels inevitable, but also unlikely to happen without tragic loss of life. Escalation of that conflict presents an existential risk in more immediate need of mitigation than dreams of AGI.

The post Paperclip Maximizers, Artificial Intelligence and Natural Stupidity appeared first on Security Boulevard.


The End of Our Dog Era

By: davehull
23 June 2024 at 22:12

Β "That's the end of our Joplin era," my wife said to my oldest daughter.

We were still crying and wiping our tears.

I didn't say it out loud, but I thought, "That was the end of our dog era."

We'd just returned to the car from the vet's office, where the three of us, through tears, accompanied our 15-year-old black lab to the end of her life.

Joplin had been the runt of her mother's litter. She was a black lab in a mixed litter of black and yellow labs. We picked her out before she was weaned and returned to the farm where she was born to bring her home a few weeks later.

When we brought her home she could be held in one hand. She was initially confined to the kitchen as we introduced her to her feline siblings and we started on the house training. At night she whimpered and cried. I slept through it, but my wife found herself laying on the kitchen floor next to Joplin comforting her so that they both could sleep.

Joplin was a good dog. Loyal, protective, affectionate, but not annoyingly so, playful well beyond her years. Though she was a black lab, she was not a lover of the water. She was never a swimmer. She was legs with lungs. She could run, and run, and run.

She loved open fields and the off-leash dog park.

She took thousands of walks over the years. Our routine for most of her life was to walk from our house through downtown and back, a three mile loop.

When we moved to Sammamish, Washington in 2012, she was three years old. She flew from Kansas City to Washington in the cargo hold of a plane with her two sibling cats, each in their own crate. I picked her up from the cargo place at Seatac. She was stressed from the journey.

I brought her home to temporary housing in Redmond where I was living alone, waiting for my family to make the journey in a couple weeks. It was 45° F and drizzling when I walked her around the grounds of the apartment complex.

When I let her into the apartment, she immediately shit on the floor. She'd never done anything like that before and never did again.

She endured Washington's winters, 45° F and drizzling rain for nine months, and adored Washington's summers.

In Sammamish we didn't live near downtown anymore. Sammamish didn't have a downtown. It was a bedroom community with strip malls. It was a beautiful place, usually 45° F and drizzling rain, except in the summer when it probably has the best weather on the planet.

There was a good-sized lily pad pond in our neighborhood, one of the area's many retention ponds. Joplin loved visiting that pond, from the water's edge. Our neighborhood was filled with the best people, and a web of walking trails wove the neighborhood to a central park and pool. Joplin loved those trails and that park.

After nearly five years, we moved back to the Midwest during what was supposed to be a vacation. We boarded Joplin at Dogs-a-Jammin; though she was a 75-pound black lab, the staff said she liked to play with the smaller dogs.

We drove from Sammamish to Lawrence, Kansas, to visit our family one summer, and when we got there, we decided we should move back. Our families were there. My parents lived in a tiny town 60 miles southwest of Wichita. My dad had had a couple back surgeries in as many years and wasn't doing great.

We told the kids. We drove back to Sammamish earlier than planned and packed everything they would need for the move back to Kansas. We drove back to Kansas. I flew back to Seattle and got the house ready to go on the market and started packing our remaining things.

I drove our Honda Pilot from Sammamish to Lawrence with two very frightened, annoyed and annoying cats. I flew back to Sammamish.

I finished packing our things in the back of a Ryder truck with a car in tow.

Joplin rode in the cab of that Ryder truck with me. For a few days she paced back and forth in the front seat. Hot breath in my face, then head out the passenger door. We slept in rest stop parking lots among the semis. She was a good traveler. She never complained about my driving.

We moved back into our old neighborhood and resumed our daily walks through downtown. Until she got to where she couldn't cover that distance anymore. She would leave the house with vigor and return laggardly. She was slowing down.

Our walks became short walks around the blocks in our neighborhood. She loved going to the middle-school down the street and running around without her leash on, but the long walks were a thing of the past.

Arthritis and inflammation set in. She did well under anti-inflammatory medication and suffered without it. We started asking ourselves, "Do you think today was a good day for Joplin?" On mornings when she was slow to get up, we would look carefully at her to confirm that she was breathing.

Walks became leisurely strolls up and down the block and then just around the house. She had occasional seizures, but would quickly recover from them. Through it all she still seemed to enjoy life. She grew more tolerant of the cats who loved to attack her wagging tail.

A couple weeks ago she collapsed in our dining room and went into a seizure. I picked her up, carried her into the living room, and comforted her. She got up and walked to the back door on her own. I let her out, and her legs gave out on her; she face-planted and seized again. I went to her and reassured her that everything was going to be alright.

But everything wasn't going to be alright. The scales had rapidly tipped in favor of bad days and at 15, she was unlikely to tilt the scale in the other direction.

She recovered and then collapsed in the yard again and seized again.

I told my wife what was happening and reminded her that I would be traveling soon and that it seemed the time had come. She hesitantly agreed. I called the vet. We cried.

The next day we all spent time with Joplin individually. I told her that she'd been a great member of our family and I thanked her for 15 years full of wonderful memories.

She collapsed and seized again the next day before we got her to the vet. I carried her to the car and put her in. My oldest daughter sat in the back of the car with her.

When we arrived at the vet, I lifted her out of the car. She walked toward the door of the clinic, collapsed and seized again.

I think that was her way of letting us know that it was indeed time and that we were doing the right thing to relieve her suffering.

The vet was kind and compassionate. Joplin was made comfortable on a quilt my grandmother had made from polyester pant suits. It was the same quilt that I put over the bench seat of the Ryder truck when Joplin sat next to me for the two plus day road trip from Seattle to Lawrence.

Joplin breathed her last breath. We all cried. We all miss her.

It was the end of our Joplin era, the end of our dog era.

The post The End of Our Dog Era appeared first on Security Boulevard.
