Originally Posted by HeinKill
Originally Posted by Ssnake
...as long as your database is reasonably up to date. But even streaming the raw sensor data would take up a lot of bandwidth. Only when you let it be filtered by what the onboard AI has identified you can reduce the data stream significantly. But then you're inside of the filter bubble that the programmers of the AI created, and we all know that programmers never make mistakes. wink
Still, it might be useful. But that would also mean no real-time raw imagery on recce flights. Maybe it's stored in the drone for post-flight evaluation and you only get to see abstracted tracks for identified objects. But that means either super-super-super sophisticated image recognition software for the AI, or active radar emission. If you want to make emission control a part of the story, I'd go with ultraviolet, optical & thermal image recognition. That's passive, but it will reduce the drones' combat worthiness in bad weather conditions (well, you could still switch on the radar when everybody else has to, too). But whatever target can successfully fake its signature, fool the AI, or isn't in the database will either be invisible or "unknown" to the pilot in real-time.


Can we assume some sort of data compression / bandwidth ‘Moore’s law’ here without it being Disney tech? The scenario is set in the 2030s.


Your call. I'm just talking about what we know today about it.
a. There are mathematical laws behind data compression that force a disproportionate growth of computing power the harder you want to compress. With more computing demand comes more latency (and energy consumption), and, if you want to save on bandwidth, loss of information. IOW, at some point the high-res imagery becomes pointless if compression artifacts reduce the effective resolution of the image to a point where old PAL/NTSC resolution delivers largely the same quality. Also, information entropy (Shannon's source coding theorem) sets a hard lower limit on lossless compression, so you can't go beyond certain compression levels even if you ignore the diminishing returns.
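That entropy floor is easy to see with a toy calculation (a sketch with made-up byte patterns, not real sensor data): Shannon entropy gives the minimum average number of bits per symbol that any lossless code can achieve.

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data):
    """Shannon entropy H = -sum(p * log2(p)): the minimum average number
    of bits per symbol needed by ANY lossless compressor."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly repetitive "image row" compresses well...
flat = b"\x00" * 900 + b"\xff" * 100
# ...while noise-like data barely compresses at all.
noisy = bytes(range(256)) * 4

print(entropy_bits_per_symbol(flat))   # ~0.47 bits/byte -> roughly 17x compressible
print(entropy_bits_per_symbol(noisy))  # 8.0 bits/byte -> incompressible
```

No amount of computing power gets you below that limit for lossless transmission; past it, you are throwing away information.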

b. My optimistic assumption would be that with some magical new compression technique you can stream digital 8K video with today's BluRay bandwidth requirements, which are at least 6 MBit/s (most likely it'll actually be 12...15 MBit/s). So, that's your bandwidth requirement. If you transmitted only identified tracks filtered by the AI, you might get away with just a few KByte/s, which is still a lot for the ELINT guys (but two or three orders of magnitude less so); the problematic part is the continuous transmission mode. Even if you hop the transmission over many frequencies, or use a wide-band spread-spectrum transmission with reduced intensity (but the weaker the signal, the easier it is to jam), you'd still tell anyone listening on the right frequencies where you are and what your flight vector is (Doppler shift).
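For scale, here is a back-of-envelope comparison of the two downlink modes (all numbers are illustrative assumptions on my part, including the track record size and update rate; only the 12 MBit/s figure comes from the estimate above):

```python
# Back-of-envelope comparison of "full video" vs. "AI-filtered tracks only".
# All numbers are illustrative assumptions, not measured values.

video_bps = 12e6    # hypothetical magically-compressed 8K stream, 12 MBit/s
track_bytes = 32    # assumed size of one track record (ID, position, vector)
tracks = 50         # assumed number of simultaneous tracks
update_hz = 2       # assumed track refresh rate

track_bps = track_bytes * 8 * tracks * update_hz  # 25,600 bit/s = 3.2 KByte/s

print(f"video:  {video_bps / 1e6:.1f} MBit/s")
print(f"tracks: {track_bps / 1e3:.1f} KBit/s")
print(f"ratio:  ~{video_bps / track_bps:.0f}x")   # ~469x, i.e. 2-3 orders of magnitude
```

Even with these generous assumptions the track feed is hundreds of times smaller, but it still has to be transmitted continuously, which is the detectability problem.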


To be honest, I am highly skeptical about the viability of UCAVs for these specific reasons. Piloted jets fly autonomously (they have a human brain or two on board for data processing), therefore they can fly largely without emitting anything, which is why (active) radar is needed to find them in the first place, and which you can then counter by electronic warfare and/or terrain masking. UCAVs would also have to fly largely autonomously under emission-control conditions, which makes the whole thing much more difficult if the robot isn't just supposed to keep the thing in the air and follow a pre-planned flight pattern, but rather has to act in contested airspace, possibly even autonomously dodging missiles while relying on passive image analysis. We're talking about a near-sentient dogfight AI, at which point the question arises why you'd still need a human pilot at all, even one controlling an entire swarm.

Yes, we have had robotic airplanes for about 10-15 years now (Predator, GlobalHawk, ...), so the concept is viable --- under conditions of air dominance, and if you limit the video feed to standard TV resolutions, unencrypted, via satellite link (which of course comes with significant latencies). That's okay if you're fighting cave men. As soon as you're in contested airspace against a competent enemy, current drones are hopelessly outclassed by classic air defense and fighter jets. So I agree: to make UCAVs viable you need low-latency control, which allows you to fly more aggressive maneuvers and pull more Gs. But that comes at the cost of constant radio emission, so the obvious question is, "why don't we just go after the control node rather than the drone?"
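The satellite-link latency point is simple physics. Assuming a geostationary relay, the speed-of-light delay alone (ignoring all processing and queueing) already rules out dogfight-speed control:

```python
# Best-case control-loop delay over a geostationary satellite relay:
# pure speed-of-light propagation, no processing or queueing included.

C = 299_792.458   # speed of light, km/s
GEO_ALT = 35_786  # geostationary orbit altitude, km

one_way_s = 2 * GEO_ALT / C          # ground -> satellite -> drone
round_trip_ms = 2 * one_way_s * 1000  # and the command/feedback loop back

print(f"{round_trip_ms:.0f} ms best-case control loop")  # ~477 ms
```

Almost half a second between stick input and seeing the result is fine for orbiting over undefended territory, useless against a maneuvering opponent; hence the need for a nearby (and therefore emitting, and therefore targetable) control node.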
Fully autonomous flying kill bots may be viable by 2030 (I hope not), but from a storytelling point of view they are either boring or must be on the villain's side (by definition you can't empathize with robots).


In short, I have no way out for you. We need to gloss it over/bank on the reader's willing suspension of disbelief. smile