On Relative Sizes of Spacecraft

@joshtigerheart, on Jul 12 2008, 08:26 PM, said in On Relative Sizes of Spacecraft:

Like I said though, a closed-network on your capital ship isn't likely, as you're going to want to be sending tactical data between your other ships, stations, command, and others. Radio would be too slow and very open for the enemy to hear.

You're assuming FTL comms are possible and reliable here, and ignoring that anything transmitted would be encrypted, even if sent over radio.

Quote

And with advanced sci-fi computer processors and software, they'd be able to crack your wind talking with little effort just by using their computer to figure out a few words and letters.

What is this, cyphers and codewords? Your analogies don't hold up against real encryption methods.

If I were absolutely paranoid about encryption, I could use a variant on One-Time Cypher Pads for open, shared data: a series of random numbers, known beforehand to every ship in a task force, that are added to the values of each character in a message. Without the pads, the messages cannot be de-scrambled, and are thus unreadable garbage. Every message is prefixed with the position on the pad it is encoded from, and with the identity of the pad if multiple pads are expected to be in use.
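
For the curious, here's a minimal sketch of that scheme in Python, assuming byte-wise addition mod 256; the function names and the pad handling are just illustrative, not any particular real protocol:

```python
import os

def encrypt(message: bytes, pad: bytes, offset: int) -> bytes:
    """Add the pad values to each message byte, starting at the agreed pad position."""
    assert len(pad) - offset >= len(message), "never run past (or reuse) the pad"
    return bytes((m + pad[offset + i]) % 256 for i, m in enumerate(message))

def decrypt(ciphertext: bytes, pad: bytes, offset: int) -> bytes:
    """Subtract the same pad values to recover the original message."""
    return bytes((c - pad[offset + i]) % 256 for i, c in enumerate(ciphertext))

pad = os.urandom(4096)              # shared with every ship before the task force departs
msg = b"FORM UP ON THE FLAGSHIP"
ct = encrypt(msg, pad, offset=0)    # the offset is the 'position on the pad' sent in the clear
assert decrypt(ct, pad, offset=0) == msg
```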

When the task force nears the end of the pad, the last thing transmitted is the (encrypted) identity of a randomly chosen visible star in a star catalog, whose spectral data will serve as the seed for a new cypher pad, as generated by an equation known to the ships in the task force beforehand.

No part of the cypher pad will ever be reused.

The cypher pads themselves are never openly transmitted, as they are generated on each ship separately, using coded data as seeds in a 'randomizer' equation known to each of the ships.
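
A rough sketch of that kind of seed-based generation, assuming SHA-256 in counter mode stands in for the 'randomizer' equation (strictly speaking, a pad derived from a shared seed is a keystream rather than a true one-time pad, but it captures the never-transmit-the-pad idea):

```python
import hashlib

def generate_pad(seed: bytes, length: int) -> bytes:
    """Expand a shared seed (e.g. the chosen star's encoded spectral data) into a pad."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

# Every ship that knows the seed derives an identical pad; the pad itself is never transmitted.
spectral_data = b"<encoded spectral data of the chosen catalog star>"
assert generate_pad(spectral_data, 4096) == generate_pad(spectral_data, 4096)
```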

Since enemy listeners never have the pad, the encrypted messages are useless garbage to them. They also cannot 'spoof' commands, or drop viruses, into the system, as anything they try to transmit will lack the proper encryption, and be ignored.

Granted, it's not like hacking a home computer. It would require advances in technology and in other areas for this to even be possible, and it likely wouldn't be feasible in early sci-fi (a few hundred years from now at most), or perhaps even middle sci-fi (a few thousand years from now at most), assuming my time guesstimates for both classifications are correct and that such classifications even exist. It's about as theoretically feasible as FTL travel. Kind of a "we think it's possible, but we aren't sure how and would need tons of research to get near it" deal. So, yes, with modern technologies, my idea is absolutely ludicrous. However, if it is possible to develop software that could access systems that are supposed to be closed and locked to outsiders (and given crazy computer-related advances such as the internet and Bluetooth, which would have looked ludicrous if you'd come up with them, say, thirty years ago, I'd say it is), then something like this would be a possibility eventually.

I could even be a bit off on the methodology. As opposed to simple wireless hacking, it could require a bit more involvement, such as launching small capsules containing computer chips, tiny bots, or something at your target, having those infiltrate and connect to the system, and then gaining access through them. I mean, if we can have A.I.-controlled fighters, why couldn't we have a tiny A.I./remote hacker? I remember reading three or four years ago, and seeing it on Discovery (or was it TLC?), that small, fly-like bots equipped with cameras were already in development. As trivia, the wings were the part giving the engineers difficulty, not making a tiny camera bot in the first place.

Edit: Eugene posted while I was typing mine.

@eugene-chin, on Jul 12 2008, 09:35 PM, said in On Relative Sizes of Spacecraft:

<snip>

That would be an excellent defense. However, it has two fundamental flaws.

The first is rather simple: it exists today, yet you're claiming it would still be a perfect anti-hacker system hundreds or thousands of years from now. That'd be like saying a castle would be an impregnable defense in modern times. But if the ancient castle builders had a nuke fired at them, it'd be pretty obvious they'd be royally screwed. The same can be applied to any modern security system, computer or otherwise. Whatever we invent hundreds to thousands of years in the future will royally screw whatever we have today if it ever tried to get through it. Sooner or later, your metaphorical castle will meet the metaphorical gunpowder (let's not even consider a nuke equivalent) and become obsolete. Just as the original castle builders couldn't conceive of something like a cannon taking down their castles, let alone existing at all, we can't conceive what crazy things we'll have centuries or more in the future.

Second: we know it exists. Wait, what? This has a chance to be wrong, but I'm confident I'm not. The mere fact that we know this exists means that some government and/or military very likely has something better that is still classified. What do they have? I'll have to wait however many years until it's declassified. But I'm confident there's something better than your system that only those with the proper clearances know about. And enough crazy things are declassified from time to time, such as the rifle attachment that fires grenades the shooter can program, right on the rifle, to explode at a set distance measured by the scope, to suggest that better is a very strong possibility.

Remember, the time when you think you are perfectly safe is when you're in the most danger. No lock is pick-proof, even if the method for picking it doesn't involve traditional lock picks, or picks at all. Otherwise we could use firewalls developed ten years ago and never have to worry about hacks or viruses.

This post has been edited by JoshTigerheart : 12 July 2008 - 10:59 PM

First, just FYI, you're using the wrong word. It isn't Hacker, it's Cracker. There is a difference. A hacker breaks in but causes no serious harm. A cracker breaks in with the intent of causing serious harm.

Second, I don't think an A.I. ship would be sent out and just allow itself to be cracked. I would think the designers of the craft wouldn't just slap in an AI and assume everything was good to go. If they were not dealing with an optical computer system, they might make sure that their AI was hardened. In other words, they would be thinking. Out on the battlefield the space fighter would be attacked from all sides, and since it is a computer fighter it would be attacked internally too. Unless you have money to burn, you would protect those AIs.

One way might be to install a secondary AI that sits there waiting for an attack from an external source; a cracker. Its job would be to stop this attack and, if need be, send its own attack to forestall any other attacks. Another thing should be said: the computer is only as smart as its maker. If the cracker is some sort of device that attaches itself to the ship, and the internal AI 'antivirus' wasn't coded for such a thing, then that ship is SOL.

One last thing, I seem to remember reading somewhere about a computer being coded to be "Creative". I just can't remember where.

@joshtigerheart, on Jul 12 2008, 10:45 PM, said in On Relative Sizes of Spacecraft:

That would be an excellent defense. However, it has two fundamental flaws.

The first is rather simple: it exists today, yet you're claiming it would still be a perfect anti-hacker system hundreds or thousands of years from now.

Bull.

One-Time Cypher Pads are never reused, so there are no patterns to pick up on. Saying "Magic Future Technology" doesn't change the fundamentally sound mathematical foundation this is built upon.

If properly used, a one-time cypher pad cannot be cracked; the users would have to be doing something stupid (reusing pads) to allow it. This has been proven mathematically; see Wikipedia: One-time pad.
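
For reference, the property Shannon proved is usually stated as perfect secrecy: the ciphertext gives an eavesdropper no information whatsoever about the message. In the usual notation (a sketch of the statement, not the full proof):

$$\Pr(M = m \mid C = c) = \Pr(M = m) \quad \text{for every message } m \text{ and ciphertext } c,$$

which holds for a one-time pad provided the pad is truly random, at least as long as the message, kept secret, and never reused.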

The one-time pad method is so secure that, properly used, it would even survive a proof that P = NP. For reference, if P turned out to equal NP (with a practical algorithm to match), essentially every other encryption system in common use, all of which rely on computational hardness, would be obsoleted overnight.

You quote castles and gunpowder, but this is Math. Sir Isaac Newton may have lived three centuries ago, and wouldn't recognize airplanes and computers, but that wouldn't stop him from understanding modern calculus.

Quote

Second: we know it exists. Wait, what? This has a chance to be wrong, but I'm confident I'm not. The mere fact that we know this exists means that some government and/or military very likely has something better that is still classified.

You're wrong. This is Math. See above.

Quote

Remember, the time when you think you are perfectly safe is when you're in the most danger. No lock is pick-proof, even if the method for picking it doesn't involve traditional lock picks, or picks at all. Otherwise we could use firewalls developed ten years ago and never have to worry about hacks or viruses.

Previous statements about Windows being "open" to outsiders by design betray the fallacy of this. When you download updates to your security programs, you're not downloading any new methodologies; most of the time it's simply a list of virus markers for your security software to look for.
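
That "list of markers" approach boils down to something like the toy sketch below; the signatures here are made up for illustration and bear no resemblance to any real product's engine:

```python
# Toy signature scanner: a definitions "update" is just a longer list of byte patterns.
SIGNATURES = {
    "fake-worm":   b"\xde\xad\xbe\xef",    # made-up marker, purely for illustration
    "fake-trojan": b"EVIL_PAYLOAD_v2",     # likewise
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan(b"routine telemetry packet"))           # []
print(scan(b"junk \xde\xad\xbe\xef more junk"))    # ['fake-worm']
```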

You haven't once proposed an actual method for cracking this, or any, proposed system of defense. Instead, what you've said boils down to "Hackers are Magic" or "Future Technology Magic" again and again. You may as well be quoting psychic powers.

Well, y'know what? By the time there are methods for hacking/cracking this (if there ever are), there will also be methods for defending against such attacks.

You know why?

Because if a hacker can find a weakness in a current security system, he can make a fortune by patenting and selling methods to defend against that self-same attack.

JTH, you misread my original point. I am not advocating voice transmission for all data. I am advocating a closed-loop system, in that all major functions of the ship cannot be controlled from the outside. A house is only insecure when there is a door. I am advocating not building a door into the system, and instead shouting instructions through the walls of the house, which must be interpreted by an internal worker/computer/etc. before execution. There will still be AWACS/recon/command data ("do this, attack this," etc.) transmitted between ships, but the system will be designed in such a way that, at worst, if hacked, the ship may receive inaccurate targeting/recon data, which will be mostly irrelevant if the ship is designed to operate without AWACS help, as it should be. It will be impossible to send a "self-destruct" or "attack a friendly" signal, as these commands will be permanently ruled out by whoever or whatever receives and processes the transmitted data. Combined with humans on board, mechanical (non-computerized) systems where possible, and good encryption practices such as Eugene Chin's, hacking a ship to kill it will be futile.
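
A minimal sketch of the kind of filtering being described, with a made-up command vocabulary; the point is simply that dangerous verbs are not in the receiver's vocabulary at all:

```python
# Only advisory recon/targeting-style messages are ever understood; anything that
# would directly actuate the ship (self-destruct, fire on a friendly, etc.) simply
# does not exist in the vocabulary the receiver is willing to process.
ACCEPTED_COMMANDS = {"update_targeting", "update_recon", "move_to_waypoint"}

def queue_for_review(command: str, payload: dict) -> None:
    """Hand the message to the crew (or an internal system) for confirmation."""
    print(f"queued {command} for review: {payload}")

def handle_transmission(command: str, payload: dict) -> None:
    if command not in ACCEPTED_COMMANDS:
        # Unknown or forbidden verbs are logged and dropped, never executed.
        print(f"discarding unrecognized command: {command!r}")
        return
    queue_for_review(command, payload)

handle_transmission("update_targeting", {"contact": "bogey-7", "bearing": 42})
handle_transmission("self_destruct", {})    # refused outright; the verb isn't in the vocabulary
```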

As I said before, if you can board a ship to hack it, why just stop at hacking it? Why not also plant a bomb, poison the air, send a squad of soldiers?

This is why I love these boards and this game. Where else could you have a debate about the feasibility of hacking into hypothetical computer systems on non-existent ships in an imaginary universe within a game?

I see your Wikipedia link, and I point you farther down the page to http://en.wikipedia....me_pad#Exploits. If it can be exploited, it cannot be perfect. This system apparently relies on something being perfect. Humans are inherently imperfect beings with no chance of reaching perfection, and therefore cannot produce perfection. Someone will make a mistake, and it will compromise your "perfect" security. Not to mention, all someone would have to do is flood the thing with random sequences fitting whatever limited range the pad uses until one works and the lock clicks open. With modern technologies this would take so long that calling it infeasible is a pretty massive understatement. However, as processing power and speed increase over the span of hundreds or even thousands of years, and considering both increase at an exponential rate, eventually you'll be able to generate a countless number of sequences in mere seconds, one of which will eventually work. The intruder might get lucky and succeed almost instantly, or it might take a very long time.
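
Just to put a rough, purely illustrative number on "would take so long" (assuming one of 256 possible pad values per character):

```python
# Search space for a 100-character message under a byte-valued pad.
candidates = 256 ** 100
print(len(str(candidates)))   # 241 -- i.e. roughly 10^240 equally likely pads
```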

And that assumes a spy doesn't infiltrate the systems at a base somewhere prior to the battle, access the system, and screw things up and/or steal sensitive security information, giving the battle hackers an unprecedented edge. This also assumes the would-be hackers don't salvage one or more of your computers from a disabled ship, pick apart your operating system, the hardware, the software, the security, the methods it uses for connection, etc., and then modify their own systems and software so that, come the next battle, they are able to use software capable of spoofing your systems, using replicas of your hardware and software, or just using the stuff they pilfered from your wrecked ships. The only way to stop the latter from being a possibility is to never suffer losses in any battle. Ever. And we all know that's not going to happen.

And this also assumes someone doesn't discover some way to simply bypass and ignore some or all of your security systems. Why knock down the brick wall when you can climb over it? That can come about in all sorts of ways. Say, in Nova, the Federation discovers that they can use Vell-os slaves' telepathic abilities to access a computer in an unorthodox manner that the target system wouldn't even recognize as a connection, or even register. Or someone invents some sort of new signal (a sort of Bluetooth or Wi-Fi x.0) that can interface with older machines, but which the older machines don't acknowledge, let alone recognize.

And this is still assuming you didn't do something like fire a pod containing hacker nanobots at your opponent, so they could directly infiltrate the enemy systems and either do the hacking there or give the human or A.I. hackers a stable, secure connection that bypasses the target's defenses, as if they had entered the administrator password correctly. Or that a boarding party intent on commandeering the ship doesn't attempt to break into the systems from the command ship itself. After all, you're going to want some techies to handle the computers on a ship you want to capture, since your typical grunt isn't likely to be trained in that capacity.

This is probably how a digital-warfare-in-space scenario would go down.

Group A and Group B meet in battle. Hackers attack each other, whether through wireless connections, hacker nanobots infiltrated into the enemy ship, or whatever. Both sides gain some understanding of each other's hacking capabilities and security systems, but the odds of anything succeeding are pretty low.

Group B loses, and Group A manages to salvage still-working computer systems from Group B's ships.

Group A sends the machines to their techies, who study them and their contents in great detail.

Espionage can substitute above steps as well.

Group A's hackers receive whatever of the techies' findings is relevant to them. They study this information and either use the captured systems, use copies of them, or modify their own to deal with the configurations, perhaps writing some new software in the process.

Group A and Group B meet in battle again some time later. Group A uses their knowledge of Group B's systems to defeat or bypass security and break into the systems at alarming rates, causing much damage and serious disruption in the battle.

Group B diligently collects data on the failures and sends it to their techies. They look at it, make changes to the software code, and advise what changes need to be made to security protocols yet again. Also, relevant information, likely much less but still useful, is sent to their hackers.

They meet in battle yet again. Group A's methods are quickly thwarted and likely accomplish little, while Group B makes much more progress, perhaps even successfully completing several hacks, since they have the edge, especially if Group A is using their hardware and software or replicas.

And so it goes. Not every scenario would go like this, as not every successful espionage mission or recovery of an enemy mainframe will be such a big hit.

And Vast brings up some interesting points (though white-hat hackers are the ones who break in with no intent to cause damage, and black hats are the crackers; cracking is often a part of hacking, but can also be a separate thing). The enemy isn't going to be like "Oh, we're being hacked. Our firewall should stop them." It's going to be digital warfare, with the hackers and A.I.s on both sides attacking, defending, and using various strategies to thwart each other. While the fighting is going on outside, there'll also be a battle over the computers that'll be just as fierce.

Quote

This is why I love these boards and this game. Where else could you have a debate about the feasibility of hacking into hypothetical computer systems on non-existent ships in an imaginary universe within a game?

:laugh: Indeed! I'm not proposing anything here that's any less feasible than space combat.

The issue is not making the system "unhackable", it's minimizing damage, which is easily possible. Brute-force message decryption after the message has been intercepted is possible. The feasibility of hacking into a ship to control it is irrelevant, as it depends on how open the ship's designers leave it. If the ship is essentially all-mechanical (see: BSG), then hacking is purposeless. However, if everything is controlled by a central AI (such as Andromeda), then hacking is quite useful. Wirelessly networking your ship's computers and giving them automated control over major ship functions (life support, weapons, engines, etc.) is stupid unless the ship itself is completely computer-controlled.

Of course, as already stated, the first side to develop telepathy wins.

Alright, we need a new topic of debate.

http://en.wikipedia....Lockheed_AC-130

The Spectre gunship currently fires a 105mm howitzer. So a big gun on a flying machine isn't completely out of the question.

The feasibility of AI depends on what happens between now and whatever universe you are trying to create.

The current trend is towards heavy AI implementation. If quantum computing pulls through, computers could become basically invulnerable to hacking (cracking 128-bit encryption is hard enough). A lot of systems would be handled by the AI, though the current preference is to have a human pull the "trigger" when it comes to ending life, even though humans are quite slow to react.

Though there are several events that could change that. Depending on the success of EMP shielding, electronics may have to be reduced so that the ship can still function to a degree if a large portion of its electronics is destroyed by EMP. There may also be an aversion to sentient AI (Matrix/BSG).

@elguapo7, on Jul 13 2008, 01:32 PM, said in On Relative Sizes of Spacecraft:

http://en.wikipedia....Lockheed_AC-130

The Spectre gunship currently fires a 105mm howitzer. So a big gun on a flying machine isn't completely out of the question.

It would be interesting to see a railgun mounted on an AC-130.

That plane is so damn versatile.

@joshtigerheart, on Jul 13 2008, 10:59 AM, said in On Relative Sizes of Spacecraft:

I see your Wikipedia link, and I point you farther down the page to http://en.wikipedia....me_pad#Exploits. If it can be exploited, it cannot be perfect. This system apparently relies on something being perfect. Humans are inherently imperfect beings with no chance of reaching perfection, and therefore cannot produce perfection. Someone will make a mistake, and it will compromise your "perfect" security. Not to mention, all someone would have to do is flood the thing with random sequences fitting whatever limited range the pad uses until one works and the lock clicks open. With modern technologies this would take so long that calling it infeasible is a pretty massive understatement. However, as processing power and speed increase over the span of hundreds or even thousands of years, and considering both increase at an exponential rate, eventually you'll be able to generate a countless number of sequences in mere seconds, one of which will eventually work. The intruder might get lucky and succeed almost instantly, or it might take a very long time.

And that assumes a spy doesn't infiltrate the systems at a base somewhere prior to the battle, access the system, and screw things up and/or steal sensitive security information, giving the battle hackers an unprecedented edge. This also assumes the would-be hackers don't salvage one or more of your computers from a disabled ship, pick apart your operating system, the hardware, the software, the security, the methods it uses for connection, etc., and then modify their own systems and software so that, come the next battle, they are able to use software capable of spoofing your systems, using replicas of your hardware and software, or just using the stuff they pilfered from your wrecked ships. The only way to stop the latter from being a possibility is to never suffer losses in any battle. Ever. And we all know that's not going to happen.

:laugh: Indeed! I'm not proposing anything here that's any less feasible than space combat.

So at this point I think most of us are spinning our wheels (I just skimmed over this as I just got back from a hiking trip this weekend).

Let me remind you, Josh, that at the same time a computer is breaking an encryption, a computer is also building that encryption. The point is that the limit on how large a number can get is really the bit size (and thus how many characters can be supported). You can have a number like 6^55 right now... a very large number, but with future computers you could have 6^55^55^55^55 (for comparison, 10^100 is a googol, and 10^(10^100) is a googolplex).

My point was that this mistake was made earlier: whatever your enemies have, you have as well. So, being an Applied Mathematics major, my sense tells me that a number growing exponentially, used for encryption, is always going to be much harder to factor, and it will only become more difficult to factor as it grows exponentially. Thus the lead goes to the encryption, not to the factoring with a faster computer.
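
A crude way to see that lead, at least for straight brute-force key search (a simplification; real factoring attacks scale differently, but the asymmetry is the same in spirit): every bit added to the key roughly doubles the attacker's work while barely changing the defender's.

```python
# Each extra key bit doubles a brute-force attacker's search space,
# while the defender's cost of using the longer key grows only modestly.
for bits in (64, 128, 256, 512):
    keyspace = 2 ** bits
    print(f"{bits:>3}-bit key: about 10^{len(str(keyspace)) - 1} possible keys")
```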


On the topic of espionage: the human element in war is vital. Understand that the fundamental flaw of computers and A.I. is that they are, in fact, programmed and understood by humans. Thus, there is almost always some way to reverse-engineer the system to find out what makes it tick. So far we have not been able to create a computer, or a life form, that thinks for itself. That is the point of humans. A human can make a capricious, nonsensical decision. Literally, a human can make a choice that has been universally agreed to be bad. This is the reason why humans are inefficient; computers can rule out certain choices, and they can run a single process millions of times without having to focus on other things. A human, however inefficient, can consider many options and things that are not considered by any other human. It is impossible to know what goes on in another human's mind, because it comes from an accumulation of data that is equal to or greater than all that person has experienced, how all of that life has been experienced, and even how they chose to perceive the experience. Basically, humans are a random number generator. There is no formula to humans, and that is why the human race, no matter how complicated A.I. or robots get, will never be extinct.

Put it this way: if computers ruled the universe, there would be one central computer; there would have to be, some central command that created or managed all the other computers/robots, etc. This is the most efficient way, but it is also the most dangerous. If you subscribe to the theory of evolution, then you will note that differences, not similarities, are the key to survival; having the ability to be set apart creates breaks in the chain of existence. This is the key to protection against disease, predators, and natural disasters. You could put a virus in the central computer that could destroy the whole civilization; this would be a near impossibility with humans. If you tried to create a disease that destroys the whole human race, you would have to ensure it could overcome all genetic, hereditary, and physiological barriers (a.k.a. natural immunity). Then you would have to ensure that it could be delivered to all six billion people (diseases actually burn themselves out often, much like a fire that has run out of forest or into a corner). It is very unlikely such a disease could be created or even exist. People are just too spread out, some are isolated, and there probably exist some with immunity, some who would develop immunity, or some who would only ever be carriers and never get the disease.

What can I say? The diversity of the human race seems to be the thing that keeps it alive, and although we may try to "purify" the race by moving to sleek, speedy machines/computers/robots, it is that inefficiency that keeps us alive.

Anyhow, we are ready for a new topic.

Jeopardy

New categories are:

Egg Lovers Delights!

Where have all the dodos gone?

The Joys of Sheep Shearing!

Astronomical Anomalies

Speej (jking Pikeman shot this down elsewhere 😉 )
and...

Britney Spears (all flaming and insults are fair game)

So uh, did anybody actually discuss my idea of a logarithmic scale for ship graphics? 😛

For about... five posts?

I do like the direction the topic went though. That's why I make these topics, to get a good discussion going.