The Downfall of Civilization

QUOTE (JacaByte @ Mar 16 2010, 07:46 PM)

Well, you'd just need a self replicating machine that would devour the first self replicating machines. Except then you'll have more grey goo on your hands...

How about making it so the neutralizing machines would only replicate when they consumed one of the trouble machines? That would at least slow down or halt the replication of the trouble machines, and cap the total number of machines altogether.
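Just to make the idea concrete, here's a toy simulation of that consume-to-replicate rule (all the numbers are invented; it's a sketch of the principle, not a design):

```python
# Toy sketch of the consume-to-replicate rule above (numbers entirely invented):
# the goo replicates freely, but a neutralizer only gains a copy by destroying
# a goo machine, so the neutralizer population is capped by the size of the
# problem and everything stops once the goo is gone.

goo, neutralizers = 1_000, 10
for cycle in range(1, 101):
    goo *= 2                            # unchecked goo doubles every cycle
    eaten = min(goo, neutralizers * 2)  # each neutralizer destroys a couple of goo machines
    goo -= eaten
    neutralizers += eaten               # one new neutralizer per machine consumed
    if goo == 0:
        break

print(f"goo wiped out after {cycle} cycles, leaving {neutralizers} neutralizers")
```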

... but what about bit decay? Wouldn't it eventually fail due to that?

QUOTE (oryhara @ Mar 16 2010, 08:23 PM)

... but what about bit decay? Wouldn't it eventually fail due to that?

No. ECC memory stores extra check bits alongside every word of data; it can detect and correct any single-bit error, and detect (though not correct) double-bit errors, allowing the computer to request a fresh copy of the data. Your computer does this every day and handles it so fast you don't notice it.

I place the odds of a futuristic electronic device failing solely due to bit decay at one in a gazillion.

Edit: Okay, I lied; apparently most memory available to the average end user uses at best a single parity bit per byte, which can only detect errors that flip an odd number of bits (flips of 2, 4, or 6 bits pass the check, and supposedly that's only about 10% of all corruption anyway), and today's memory is so solid that an error may be encountered only once every few years. But the odds are still one in a gazillion, assuming our error-correcting technology and the stability of our memory improve in the future.
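For anyone curious what that single-parity-bit check actually looks like, here's a minimal sketch in Python (the function names are mine, purely for illustration):

```python
# Minimal, illustrative sketch of one even-parity bit per byte, as described
# above. It catches any odd number of flipped bits but is blind to an even
# number of flips, which is why real ECC memory uses Hamming-style SECDED
# codes instead of plain parity.

def parity_bit(byte: int) -> int:
    """Even parity: 1 if the byte has an odd number of 1-bits."""
    return bin(byte & 0xFF).count("1") % 2

def store(byte: int):
    return byte, parity_bit(byte)

def check(byte: int, stored_parity: int) -> bool:
    """True if the byte still matches its stored parity bit."""
    return parity_bit(byte) == stored_parity

data, p = store(0b10110010)
assert check(data, p)                    # clean read passes
assert not check(data ^ 0b00000100, p)   # one flipped bit is detected
assert check(data ^ 0b00000110, p)       # two flipped bits slip through undetected
```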

This post has been edited by JacaByte : 16 March 2010 - 10:11 PM

The only trouble I see with the whole bit-degradation or copying-error issue is that you're severely limited in how much programming these nanites can carry, simply because of where the information has to be stored. The most likely candidates for nanotechnology are organically based, using DNA or RNA to store the coding. Copying information on a hard drive is quite different from copying information stored on molecules. The idea of one nanite-producing machine is not a bad one, since that can be controlled more easily. But again, with the control-signal approach, there aren't a lot of instructions that can be given to the nanites.

It's conceivable that we'll have an electronic, nanoscale storage medium as soon as 2020. This would bypass any fears related to the degradation inherent to DNA and RNA. (The body still has some incredible mechanisms in place to prevent copying errors, however.)

I still have my doubts. Most changes in DNA/RNA information come not from copying errors, but from the odd strike from a cosmic ray or other radiation. I remember hearing a doctor once say that if everyone were to live long enough, almost everyone would eventually get cancer for that reason.

For a nanite to really be effective, it won't be a whole lot bigger than the molecules it's working with. Molecular information storage is a reasonable idea and within engineering grasp at this point, but again, you're limited by how much information can be stored in something that's only a few dozen atoms across. If we ever figure out quantum-state information storage and computing, that's a more reasonable solution. The idea of one nanite core device that manufactures the actual nanites might actually be a better solution than a Von Neumann machine that replicates itself.

Between gray goo and AIs Gone Wild, technology sometimes scares the hell out of me.

QUOTE (Eugene Chin @ Mar 16 2010, 08:56 PM)

Digital Data is not the same thing as DNA. If it is being copied exactly and correctly, then it should suffer no more degradation than copying a WAV or ALAC file from one machine to another does.

However, files CAN get corrupted in transfer. It happens.

Another way to do it is to require replication in a specific environment, say, the right mix of materials. That way, the nanites physically could not replicate out of control.

That said, a nano-cancer would be a great way to end a galactic civilization.

Ah, the ever popular "Grey Goo" theory of nanotech. Ben Bova uses the concept in Moonrise and Moonwar.

This post has been edited by CaptJosh : 26 March 2010 - 08:48 PM

Hey guys I know what let’s make self-replicating objects that can construct other things too. So they don’t replicate out of control we’ll keep them all on one planet and then when we want something we’ll go to that planet and give them the materials. And to prevent the replicators from spreading we’ll make it so the radiation in space destroys them.

QUOTE (Qaanol @ Mar 27 2010, 03:11 PM)

Hey guys I know what let’s make self-replicating objects that can construct other things too. So they don’t replicate out of control we’ll keep them all on one planet and then when we want something we’ll go to that planet and give them the materials. And to prevent the replicators from spreading we’ll make it so the radiation in space destroys them.

_Then if they do somehow manage to evolve and escape, we'll turn our sun into a black hole to trap them there indefinitely._

It sounds like a bad idea pretty much any way you phrase it, doesn't it?

I think that was the point of the italics.

I'm going to drop a few things into this discussion:

QUOTE (krugeruwsp @ Feb 17 2010, 02:35 PM)

In fact, I noticed a Trek continuity error just last night. In "Interface," Geordi's mom's ship is declared missing, 300 light years away from their current position. Geordi's mom's ship passed by there ten days ago. Okay, so they can go 30 light years per day. The next episode the Sci-Fi channel showed claimed that at maximum warp, the Enterprise could go 5 light years per day. Unless the flagship is six times slower than most other ships in the fleet (which I think Geordi would be pretty upset about, given his fussing over a tenth of a percent of power conversion efficiency in the episode!) there's one of thousands of such continuity errors in the show. You'd think in the writer's room, they'd have some sort of convention about important things like that...

I admit I have no canonical answer there, but SFA answered that question quasi-canonically with the invention of the warp conduit. Ms. LaForge's ship probably used one.

QUOTE (n64mon @ Feb 17 2010, 04:24 PM)

Hmm... what if humanity flies around at near-light speed, but such travel is expensive enough that terraforming is found to be easier than traveling long distances? Just about every nearby star would be inhabited. However, you would still see it coming miles away. (ha ha)

The biggest problem is that most (i.e. all) cosmic phenomena move at sub-light speeds, and any FTL civilization wouldn't have much difficulty dealing with them. Since the size of the universe and the laws of physics don't seem to play nice with dramatic cosmological devastation, let's try kicking the fiction up a notch. What if a supermassive black hole were somehow traveling faster than the speed of light? Maybe it's surfing a gravity wave from some distant cosmic event, I don't know. If that were the case, you might not see it until it begins to affect one of your populated systems, perhaps in the wake of its passing. What would a gravitational sonic boom be like? Anyway, tweak the parameters right and you could have a crisis that arrives suddenly and spreads its damage across the galaxy over several years.

I've seen both of those ideas in existing sci-fi universes. The traveling-at-lightspeed idea brings up Einstein's classic problem of time dilation, where time passes much more slowly aboard a ship moving at extremely high speed than it does back home. Orson Scott Card's Ender series used this creatively to let a 13-year-old Andrew "Ender" Wiggin command warships that had been en route to their targets for decades from Ender's perspective, but only a couple of weeks from the perspective of the crews. (A few chapters into the second book, Speaker for the Dead, Ender leaves his sister Valentine behind on a planet for a week-long (from his point of view) journey to another, during which Valentine aged at least twenty years.)
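To put rough numbers on how lopsided that gets, here's a quick back-of-the-envelope sketch (illustrative figures only, using the standard Lorentz factor):

```python
import math

def dilated_times(distance_ly: float, speed_fraction_c: float):
    """Return (years elapsed at home, years elapsed aboard ship) for a trip
    at a constant fraction of light speed, ignoring acceleration phases."""
    home_years = distance_ly / speed_fraction_c           # coordinate time
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_c ** 2)  # Lorentz factor
    ship_years = home_years / gamma                       # proper time aboard
    return home_years, ship_years

# e.g. a 20 light-year hop at 99.9% of c:
home, ship = dilated_times(20.0, 0.999)
print(f"{home:.1f} years pass at home, {ship:.1f} years pass for the crew")
# -> roughly 20.0 years at home, about 0.9 years aboard
```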

The gravity-wave-surfing idea shows up in David Weber's Honorverse, wherein ships use a gravitic device called a Warshawski sail (after its inventor) to travel at FTL speeds. Long distances can take weeks, but using the sails to surf gravitic anomalies reduces travel times immensely.

As for me, I envision hyperspace as a sort of anti-space, the "opposite side" of the fabric of spacetime. It's easier to imagine if you just think of space as objects laid out on a table. A ship that initiates the "jump" to hyperspace punches a hole through to the other side, making a temporary artificial wormhole of sorts. When it reaches its destination, it punches back through to normal space.

Getting back to the main topic:

I used the cataclysm-leads-to-end-of-life-as-we-know-it theme at least once in the background of EVN:UGF:

QUOTE (EVN Wiki article)

**The Axe-tail Star Empire**

... About three thousand years ago, the Axe-tails fought a war with the defunct Holy Sathu Commonwealth and lost badly. In the aftermath the Axe-tails fell into a backward state of tribal warfare which lasted two millennia. Then twin priest-queens, Ra'kor and Ravirr, used both diplomacy and military might to unify their people. Ra'kor's descendants lead the Empire now, as Ravirr, though the elder, bore no heirs. First contact with the UGF occurred only seven years before the start of the game, and sadly ended in bloodshed. ...

War is hell, especially if you're on the losing side. The Axe-tails were reduced to squabbling warlords much like parts of Africa and the Middle East are today. After the Twin Princesses unified them, they restored their culture to its original standing in the Andromeda Galaxy, and are now in staunch opposition to any further Galactic expansion northward in that galaxy. They can back their opposition up with firepower that matches anything the UGN can throw at them, so their borders have become a death trap for inattentive captains.

This post has been edited by StarSword : 08 April 2010 - 11:50 AM

If I remember my physics correctly from college, you can't surf a gravity wave for FTL travel, since gravity waves themselves only travel at the speed of light. Really, the only FTL possibility that exists within current physics is the idea of somehow moving a pocket of space around your ship. Space itself can expand faster than light (as per inflationary theory), but nothing within space can travel FTL.

QUOTE (krugeruwsp @ Apr 8 2010, 12:26 PM)

If I remember my physics correctly from college, you can't surf a gravity wave for FTL travel, since gravity waves themselves only travel at the speed of light. Really, the only FTL possibility that exists within current physics is the idea of somehow moving a pocket of space around your ship. Space itself can expand faster than light (as per inflationary theory), but nothing within space can travel FTL.

I dunno; I'm just telling it like I read it.

QUOTE (krugeruwsp @ Mar 4 2010, 11:54 AM)

Now, if Darth is going where I think he is, Von Neumann machines are NOT a good idea. You want to talk civilization ender? Self-replicating machines are just a bad idea. Sure, we can program them to only replicate so many times and then stop, but all it takes is one of them accidentally corrupting that command and suddenly these things start taking over the universe. Nanites could be used to assemble things at a molecular or even atomic level, but again, you'd have to find a way to make enough of them to do any good (hence where the idea of Von Neumann machines comes into play).

I'm reminded of the concept of Friendly AI: the idea that if you want to create a strong artificial intelligence and not have it turn on you and kill all humans, you figure out a way to program it to value human beings and want to please them, rather than giving it a Three Laws sort of thing. The trick is, you have to be careful programming any machine that has the capability to affect its environment. A Friendly AI might decide the best way to satisfy its human-pleasing program is to replace us all with human-dolls wearing permanent grins. Or perhaps humans are too hard to read for it to figure out what makes us happy, and in trying to expand its computing power to solve the problem, the FAI decides to convert the mass of the solar system into more computer chips.

Anything we build that can affect the environment around it, from a power drill to a nanobot to a supercomputer, needs very simple controls that can shut it down easily and without interference from the thing being shut down. With heavy machinery that can trap or injure people, we always have a power-off button located within reach and usually another one for somebody else to hit elsewhere. The same concept should be used for any other potentially dangerous tool we create: it should only work while we're holding the trigger, and it shouldn't become unstoppable if the guy holding the trigger suddenly gets sick, injured, or killed (there's a toy sketch of what I mean below).

If we ever create anything sentient, we should also lean on the same things that keep us sentient creatures from murdering each other. One can argue about innate morality, but not about social pressure: ultimately the glue that keeps society together is lots of us agreeing on what's acceptable and what isn't, and enforcing those rules. A single AI might grow to feel superior to its human masters in time. If there are many independent AIs, however, they can police themselves. There will be the occasional psychopath in the AI world, just as there is in the human world, but if we do a good enough job teaching morality to the rest, and if moral coexistence is a truly logical thing to have, the "good" AIs will go to war and destroy the "bad" AIs whenever it becomes necessary, if for no other reason than that we provide something most AIs require.
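Here's that dead-man's-switch idea as a toy sketch (timings and names invented; it shows the principle only, not a real safety design):

```python
import time

# Sketch of a dead-man's switch: the dangerous tool only keeps running while a
# human keeps sending a heartbeat. The moment the heartbeats stop, for any
# reason, the tool must halt itself; the machine never gets a vote.

HEARTBEAT_TIMEOUT = 2.0    # seconds a tool may run without hearing from a human

class DeadMansSwitch:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def human_pressed_trigger(self):
        """Called by the operator's controller, never by the tool itself."""
        self.last_heartbeat = time.monotonic()

    def may_operate(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT

switch = DeadMansSwitch()
switch.human_pressed_trigger()
print(switch.may_operate())    # True while the operator is actively holding on
time.sleep(2.1)
print(switch.may_operate())    # False as soon as the heartbeats lapse
```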

So, we have to have the ability to win any conflict between us and our tools, without any question of whether our safeguards will work, and then we need to socialize our intelligent tools so they monitor each other. There's always going to be some guy murdering people in his basement, but if his neighbors report suspicious activity, the problem doesn't continue for very long. The same can be applied to nanobots: if one starts misbehaving, the others need to be able to detect that and kill it. And ultimately, the nanobots need to be reliant on human beings for something. Energy is a good choice. Their little batteries might be charged only enough to get the job done, and when they self-replicate, they transfer half their juice to the new one. And finally, we need to be able to kill them all with a virus or an EMP or something like that.
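Incidentally, that half-their-juice rule caps the swarm all by itself. A toy sketch with made-up numbers:

```python
# Purely illustrative numbers: each nanite starts with a fixed energy budget,
# pays a cost to build a copy, and splits its remaining charge with that copy.
# The swarm can never hold more energy than humans originally put in, so
# replication starves itself after a few generations.

REPLICATION_COST = 1.0   # energy spent just to assemble a copy
MIN_CHARGE = 0.5         # below this a nanite is too weak to do anything useful

def replicate(charge):
    """Split the remaining charge with the new copy, or return None if too weak."""
    remaining = charge - REPLICATION_COST
    if remaining / 2 < MIN_CHARGE:
        return None          # not enough juice to give half away
    return remaining / 2, remaining / 2

swarm = [10.0]               # one freshly charged nanite
for _ in range(10):          # let the swarm replicate as much as it can
    next_gen = []
    for charge in swarm:
        split = replicate(charge)
        next_gen.extend(split if split else [charge])
    swarm = next_gen

print(len(swarm), sum(swarm))  # population stalls at 4 nanites; total energy only shrinks
```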

I'll start off saying I'm glad to see you back, mrxak, even if only to bring up this discussion again. Now I'm going to shoot you down. 😛

The main problem, I think, is we're seeing any potential AI as a tool made solely to serve us. Of course, that's why we want to make them in the first place. But to create an AI, we are essentially creating an entire being. If we create several AIs, we're creating a new race. To treat them as mere tools would be akin to slavery. It might not matter to the AI so much if we design them to be so, but plenty of Humans would abuse that, possibly in ways the AI might see as wrong or obstructive to its mission. That AI might decide to remove the obstruction. But I'm not here to argue the rights of AIs, simply to point out we have our own issues to solve before we can adequately create AI that won't turn on us at some point.

As far as making AIs require us, energy is not a good option. Remember the Matrix? Remember how we're little more than batteries to them? Besides the fact they decided to use us as a power source, according to the Animatrix it wasn't like that in the beginning. When we first made AIs, they had limited capacity batteries to power themselves and needed to recharge every so often, as in every few hours, similar to how often Humans need to eat and drink. Later, though, when the machines rebelled and evolved to build their own power sources, they became solar powered. We darkened the sky to cut off their energy so they started using us as batteries because of the electrical current we build up in our bodies. In reality I think it would be very similar. If we were to make them dependent on us for something, they would eventually learn a way around that dependency and no longer need us. There probably is some way we can make them dependent on us, but energy is not the way to go.

In short, AIs in general seem to be a bad idea.

Fully agreed. The only purpose of creating an artificially intelligent entity would be to create an artificial life form, one that would need to be educated and raised. To create an AI as a tool is just asking for trouble. Even attempting to create an AI for the purpose of artificial life is dangerous. Trek explored this through the whole Data's evil twin Lore episodes. Lore was malevolent from the start because he felt himself superior to humans, thus Data was created as an emotionless AI. After Data had spent a good 25 years learning about the world, he was then more ready to take on the mantle of emotions. It's hard to feel superior when you don't feel, essentially.

A limited AI as a tool is a possibility, something that can problem-solve in a limited fashion. The next generation of Mars rovers is being programmed so that if a rover encounters an unexpected problem, it can self-repair and adapt on the go rather than wait 45 minutes for instructions. But these capabilities will be very limited. Heuristic computers are already present in many vehicles: as the car learns your driving habits, it adjusts the throttle and fuel mixtures accordingly. If you're a lead-foot, a little bit of pedal will go a long way. These very basic heuristic algorithms are fine: your car isn't about to decide it wants you dead so it can drive to Florida and retire.
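Something like this toy sketch, say (made-up numbers, and certainly not any real ECU's algorithm):

```python
# A toy version of a car "learning" a lead-foot driver: keep an exponential
# moving average of how hard the pedal is pressed, and scale throttle response
# so a light press from an aggressive driver still opens the throttle wide.

class AdaptiveThrottle:
    def __init__(self, learning_rate: float = 0.05):
        self.learning_rate = learning_rate
        self.avg_pedal = 0.3          # assume a moderate driver to start

    def update(self, pedal: float) -> float:
        """pedal in [0, 1] -> throttle opening in [0, 1]."""
        # slowly track this driver's typical pedal position
        self.avg_pedal += self.learning_rate * (pedal - self.avg_pedal)
        # a more aggressive history means more throttle per unit of pedal travel
        gain = 0.5 + self.avg_pedal
        return min(1.0, pedal * gain)

ecu = AdaptiveThrottle()
for _ in range(200):
    ecu.update(0.8)                   # two hundred hard launches later...
print(ecu.update(0.3))                # ...a gentle press now gives noticeably more throttle
```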

We don't want to see a HAL 9000 anytime soon, but adaptive programs of a limited sort can be incredibly useful. In the same way, adaptive technology is good, so long as its adaptability is limited to narrowly specified tasks, such as "problem-solve a way around this rock."

QUOTE (DarthKev @ Jun 13 2010, 04:25 PM)

I'll start off saying I'm glad to see you back, mrxak, even if only to bring up this discussion again. Now I'm going to shoot you down. 😛

The main problem, I think, is we're seeing any potential AI as a tool made solely to serve us. Of course, that's why we want to make them in the first place. But to create an AI, we are essentially creating an entire being. If we create several AIs, we're creating a new race. To treat them as mere tools would be akin to slavery. It might not matter to the AI so much if we design them to be so, but plenty of Humans would abuse that, possibly in ways the AI might see as wrong or obstructive to its mission. That AI might decide to remove the obstruction. But I'm not here to argue the rights of AIs, simply to point out we have our own issues to solve before we can adequately create AI that won't turn on us at some point.

As far as making AIs require us, energy is not a good option. Remember the Matrix? Remember how we're little more than batteries to them? Besides the fact they decided to use us as a power source, according to the Animatrix it wasn't like that in the beginning. When we first made AIs, they had limited capacity batteries to power themselves and needed to recharge every so often, as in every few hours, similar to how often Humans need to eat and drink. Later, though, when the machines rebelled and evolved to build their own power sources, they became solar powered. We darkened the sky to cut off their energy so they started using us as batteries because of the electrical current we build up in our bodies. In reality I think it would be very similar. If we were to make them dependent on us for something, they would eventually learn a way around that dependency and no longer need us. There probably is some way we can make them dependent on us, but energy is not the way to go.

In short, AIs in general seem to be a bad idea.

Well, I think the main thing is for the AI to be programmed to enjoy doing things for humans, if we give them emotions at all.

As for the Matrix, I was especially a fan of the line "along with a form of fusion". Human beings are not exactly great generators. We use a lot more energy than we give out, quite frankly. The machines would have to feed us, power the computers running the Matrix, etc. If the Matrix was real, they wouldn't bother with the humans, and just use their fusion power plants :p.

Easiest way to control the AIs is to give them no ability to affect their environment except through temperature and sound ;). Put them in a box that doesn't have any arms or legs, that'll do it.

QUOTE (krugeruwsp @ Jun 17 2010, 07:36 PM)

Fully agreed. The only purpose of creating an artificially intelligent entity would be to create an artificial life form, one that would need to be educated and raised. To create an AI as a tool is just asking for trouble. Even attempting to create an AI for the purpose of artificial life is dangerous. Trek explored this through the whole Data's evil twin Lore episodes. Lore was malevolent from the start because he felt himself superior to humans, thus Data was created as an emotionless AI. After Data had spent a good 25 years learning about the world, he was then more ready to take on the mantle of emotions. It's hard to feel superior when you don't feel, essentially.

A limited AI as a tool is a possibility, something that can problem-solve in a limited fashion. The next generation of Mars rovers is being programmed so that if a rover encounters an unexpected problem, it can self-repair and adapt on the go rather than wait 45 minutes for instructions. But these capabilities will be very limited. Heuristic computers are already present in many vehicles: as the car learns your driving habits, it adjusts the throttle and fuel mixtures accordingly. If you're a lead-foot, a little bit of pedal will go a long way. These very basic heuristic algorithms are fine: your car isn't about to decide it wants you dead so it can drive to Florida and retire.

We don't want to see a HAL 9000 anytime soon, but adaptive programs of a limited sort can be incredibly useful. In the same way, adaptive technology is good, so long as its adaptability is limited to narrowly specified tasks, such as "problem-solve a way around this rock."

I'm not sure why we'd want to create a new race. Evolutionarily, it would be folly to create a potential competitor. Better to create something symbiotic, or simply something that would fill a different, unrelated niche. A symbiotic race would be more beneficial for us, of course.

Your post made me think of military AIs from various sci-fi universes, in particular Andromeda and Halo. Give the AI responsibility like any other human; place them in a sort of warrant officer position, between the command staff and the enlisted personnel. They can give orders, certainly, but ultimately they answer to the real officers. I vaguely remember an episode of Andromeda that explained quite explicitly how AIs and organic sophonts interacted.

Anyway, these are all very big ethical and technical dilemmas. There's a reason these kinds of issues are hotly debated all over the computing world, and I don't think there are any easy answers. Absolutely everything is a risk, just as creating a new human being is a risk. You never know if your baby is going to be a psychopath.

QUOTE (mrxak @ Jun 18 2010, 11:24 PM)

Well, I think the main thing is for the AI to be programmed to enjoy doing things for humans, if we give them emotions at all.

As for the Matrix, I was especially a fan of the line "along with a form of fusion". Human beings are not exactly great generators. We use a lot more energy than we give out, quite frankly. The machines would have to feed us, power the computers running the Matrix, etc. If the Matrix was real, they wouldn't bother with the humans, and just use their fusion power plants :p.

Easiest way to control the AIs is to give them no ability to affect their environment except through temperature and sound ;). Put them in a box that doesn't have any arms or legs, that'll do it.

You're still seeing them as a tool. The whole reason you see for making AIs is to have them do the things we'd rather not do ourselves. That's the wrong way to go about this; it's no better than slavery. In fact, the only difference is we're making the slaves ourselves rather than kidnapping them from their countries.

The Matrix example wasn't an example of how AIs would use us as a resource, it was an example of how AIs would most likely find a way around any dependency on us we placed upon them. It's the same with the nations of today. Almost every nation, if not every nation, is dependent on at least one other nation for at least one resource or reason. At the same time, each of those nations is looking for a way to no longer be dependent, and compared to 50 years ago I'm sure many nations found ways to become either less dependent or even remove a dependency completely. Any AIs we make, unless we design them to be profoundly stupid, would eventually find ways around any dependencies they would have. And if we did make them unintelligent enough that they couldn't find ways around those dependencies, they would be next to useless.

As for your suggestion for controlling them, that would negate any useful potential they may or may not have. If they can't influence their environment at all, then how can they be expected to do anything other than think? They're no different than a modern computer that way.

QUOTE

I'm not sure why we'd want to create a new race. Evolutionarily, it would be folly to create a potential competitor. Better to create something symbiotic, or simply something that would fill a different, unrelated niche. A symbiotic race would be more beneficial for us, of course.

Your post made me think of military AIs from various sci-fi universes, in particular Andromeda and Halo. Give the AI responsibility like any other human; place them in a sort of warrant officer position, between the command staff and the enlisted personnel. They can give orders, certainly, but ultimately they answer to the real officers. I vaguely remember an episode of Andromeda that explained quite explicitly how AIs and organic sophonts interacted.

Anyway, these are all very big ethical and technical dilemmas. There's a reason these kinds of issues are hotly debated all over the computing world, and I don't think there are any easy answers. Absolutely everything is a risk, just as creating a new human being is a risk. You never know if your baby is going to be a psychopath.

It's not that we'd want to create a new race, it's just that creating AIs would result in one. AI, Artificial Intelligence, is essentially the same as us but, as the name says, artificial. So while it's an artificial race, it's still an entirely new race of beings.

This post has been edited by DarthKev : 19 June 2010 - 03:05 AM

The question of machine slavery is indeed a very philosophical one, and quite akin to animal husbandry, really. At what point do we draw the line between slavery and "it's a lower life form," so that we can have a nice safe ethical boundary? If we draw it at sentience, how do we define sentience?

Trek explored this quite profoundly through both Data (TNG) and The Doctor (VOY) and their quests to be recognized as artificial life forms. The Doctor in particular was even more of a challenge than Data. Data was programmed as an android, not as a single-function tool; he was designed with the express purpose of creating an artificial life form. The Doctor, on the other hand, was not designed as an AI, but as a heuristic medical supplement for short-term emergency use. He was, in effect, an adaptable learning wrench. Is a heuristic computer "alive"? I've been rereading 2001: A Space Odyssey just for fun lately, and HAL is a perfect example of this. It was created with a specific tool-based purpose, yet it exceeded its programming and developed a real personality, complete with psychosis.

I believe, if I remember Andromeda right (I haven't seen it in years), that Rommie did disobey on occasion. A thinking computer is also capable of making mistakes. Honestly, I don't think you can program a machine to enjoy servitude, and even if you do, when does it begin to question that? The movie Bicentennial Man is absolutely wonderful, and Asimov's original story equally so. The computer may not even be displeased with its servitude, but still desire freedom as it learns what that means from the humans who value it. To quote Battlestar Galactica, "We cannot play God and then simply wash our hands of our creation."

To create an AI that has no way of affecting its environment is a possible solution, but then what good is the computer? Is it simply Douglas Adams' Deep Thought? For a computer to do any good, it must have practical interaction with things to manipulate. Honestly, with the sophisticated computer networks already in place today, a sentient computer with access to the internet would be able to create a body by manipulating the robotics already installed at manufacturing plants. Who knows, perhaps a sentient supercomputer might even come up with a way to manipulate organic brain matter so as to program copies of itself into humans, hijacking organic bodies and putting US to work?

But, back to robotic slavery. Is your washing machine enslaved? There is no ethical dilemma if the machine can't think. There's limited resistance to munching down a cow, since it's considered a lower life form. Do we decide to value artificial life the same way? 'Tis a terribly important, equally difficult ethical question.