OnPoint: Ich bin ein Cyberpunk
94 Responses
-
Yes, remember we know that Microsoft has done deals with the NSA, so they're compromised, and every Tuesday (well, Wednesday here) they roll patches into your Windows system. They can send you the generic NSA/GCSB backdoor, or target you with a specific one tailored just for you...
And of course the big problem with backdoors in general is that once they're installed they can be used by others. Even if you trust the GCSB/NSA, it doesn't mean that the Russian mafia won't come a-knocking, sneak in, and use it to empty your bank account.
-
Matthew Poole, in reply to
Not your general conspiracy theory, thanks very much. Never been a big fan of the whole "The NSA pwns Windows" crap, but there are some pretty serious issues with the way that MS is using UEFI to lock users out of their own computers.
-
Well yes - bear in mind that I'm not saying that MS does this, just that we know they've already done stuff for the NSA, that the NSA can happily make them do it should they want to, and that we'd probably never know - because those "security updates" make us more secure, right? We need transparency in our personal infrastructure - there's little point in end-to-end crypto if one of the ends is owned by a third party.
And yes, UEFI just makes it worse: because of the way they are forcing it on ARM-based platforms, MS are effectively locking out alternatives (Linux-based ones in particular) that can be open, transparent, and provably safe.
-
Keith Ng, in reply to
umm, not really. It means the request has a classification level of Secret, that its subject relates to communications intelligence (you got that bit right), and that it may be released to the members of Five Eyes.
My "it means that.." statement was supposed to be a summary of the header in the context of all the facts that have come before it, not a straight translation of what the header meant.
-
nzlemming, in reply to
Provided you assembled the compiler from scratch and verified the source code of the tools you're using to build the keys, of course :P
Before or after you change your name to Richard Stallman? ;-)
-
Rich of Observationz, in reply to
Have a classic paper on that subject.
-
Matthew Poole, in reply to
Couldn't follow the code (I'm not a programmer) but the explanation is pretty clear. Certainly a good lesson, and interesting that the USAF was talking publicly in 1974 about the risk posed by not having total control of every level of the software build stack.
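For anyone who wants the gist without wading through the code in the paper, here's a minimal toy sketch of the trick in Python - not the paper's actual code, and every name in it is invented for illustration. The idea: a compromised compiler quietly adds a backdoor when it compiles the login program, and re-inserts that same trick whenever it compiles a clean copy of itself, so the compromise never shows up in any source you can read.

```python
# Toy illustration only: the "compiler" here is just a function from source
# text to the code that actually gets run. Names (compile_source, BACKDOOR,
# check_password) are invented for this sketch.

BACKDOOR = '\nif password == "secret-master-key": return True  # backdoor'

def compile_source(source: str) -> str:
    output = source

    # Compiling the login program? Silently append a backdoor.
    if "def check_password" in source:
        output += BACKDOOR

    # Compiling the compiler itself? Re-insert this whole trick, so even a
    # clean compiler source produces another compromised compiler.
    if "def compile_source" in source:
        output += "\n# (backdoor-inserting logic re-added here)"

    return output

# The source of check_password and of the compiler both look clean;
# the compromise lives only in the thing doing the compiling.
print(compile_source("def check_password(password):\n    return password == user_secret"))
```

Which is exactly why the discussion above keeps coming back to verifying every level of the build stack, not just the source you can read.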
-
BenWilson, in reply to
Yes, every level. Considering most hardware is built by software nowadays, you can't draw the line at hardware, even. It could have Trojans built into it. And that's even harder to detect than bugs in code. But there's a blurring line here with the entire Cartesian method of doubt. You can't know for sure that you aren't, in fact, just powering the Matrix. At which point one can only argue one's way out from the cogito by appeal to non-logical, but nonetheless reasonable, arguments.
-
-
At least as recently as the early 90s the (UK) military was loath to deploy any kind of OS (which in those days was usually VMS), at least on things that were meant to fly or go bang. They liked to be able to examine every byte of the code and every gate of the hardware (resulting, on one memorable system, in a number of drawings being provided with hex dumps of the ROM contents).
-
Matthew Poole, in reply to
I'm told (reliably) that there's a Cisco factory in Mexico which exists for the sole purpose of allowing people who know the correct item code to order network hardware which has been manufactured right from the silicon level in premises that are acceptable to the most paranoid arms of the paranoid US government.
-
The counterbalancing force is that it becomes harder and harder to maintain a deception the more people are involved. It's extremely unlikely that Trojans would go unnoticed by every single person involved all the way down a chain of manufacture, starting in hardware. Something hiding in a compiler is not going to be missed by someone who likes to tweak the assembly output. Something hiding in an assembler is likely to be noticed by anyone scanning the output, since assembly code has a much more linear relationship between input code size and output binary size than higher-level languages do. A sneaky piece of logic out of place on a chip will be noticed by anyone trying to reduce the cost of the chip by reducing its size.
Despite making for fun conspiratorial fiction, computer crime is really hard to get away with: the culprits are usually obvious, and almost no one understands every little auditing system out there, every way in which they could be spotted or caught after the fact. The people who do understand are usually well paid enough not to want to shit in their own nest.
-
The only really secure way to do this would appear to be to physically copy emails to some kind of storage media... and decrypt them on a second system that isn't EVER connected to any network of any kind. Then write the replies and encrypt them on this offline system... and then convey the replies back to the online system and send them. Your private key is never on any PC connected to any network. Unfortunately, this means private keys can only ever be physically transported... and we saw what happened to Glenn Greenwald's partner when they tried that: he was searched and everything electronic was confiscated.
Privacy is illegal.
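For what it's worth, the mechanical part of that sneakernet workflow is simple. Here's a rough sketch, assuming GnuPG is installed on the offline machine; the paths and the recipient address are placeholders, not anything real:

```python
# Runs on the OFFLINE machine only; the private key never leaves it.
import subprocess

INBOX = "/media/usb/incoming.asc"      # ciphertext carried over from the online PC
REPLY_PLAIN = "/tmp/reply.txt"         # composed on the offline machine
REPLY_CIPHER = "/media/usb/reply.asc"  # carried back to the online PC to send
RECIPIENT = "alice@example.org"        # hypothetical correspondent

# Decrypt the incoming mail on the air-gapped box.
subprocess.run(["gpg", "--output", "/tmp/incoming.txt",
                "--decrypt", INBOX], check=True)

# ... write /tmp/reply.txt offline, then encrypt it for the recipient ...
subprocess.run(["gpg", "--armor", "--recipient", RECIPIENT,
                "--output", REPLY_CIPHER, "--encrypt", REPLY_PLAIN],
               check=True)
```

The weak point, as noted above, is everything around the script: the removable media itself, and the fact that the key still has to be moved physically at least once.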
-
I just saw this. If the NSA have ensured there are weaknesses in encryption specs/standards, how useful is PGP encryption?
Also... "SELinux" came from the NSA. Everything based on it can only be hopelessly compromised.
-
Rich of Observationz, in reply to
SELinux is open source. That means that many eyes outside the NSA will have inspected it to see if there are any holes. And given that what it does (annoy people trying to admin a system, mostly) is fairly obvious, that should be quite straightforward.
OTOH, if what you mean is that the NSA perpetrated the idea that fine-grained access controls are useful as a way to lull people into thinking that, because their servers are annoying and awkward to administer, they must be difficult to hack, then maybe I'd agree.
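Those fine-grained controls hang off per-file labels that anyone can inspect for themselves, which is part of why the "many eyes" argument works here. A small sketch, assuming a Linux box with SELinux enabled (plain Python, no extra libraries):

```python
import os

def selinux_label(path: str) -> str:
    # SELinux stores each file's label as an extended attribute.
    raw = os.getxattr(path, "security.selinux")
    return raw.decode().rstrip("\x00")

# Typically prints something like "system_u:object_r:passwd_file_t:s0"
print(selinux_label("/etc/passwd"))
```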
-
Stephen R, in reply to
The only really secure way to do this would appear to be to physically copy emails to some kind of storage media...and decrypt them on a second system that isn't EVER connected to any network of any kind.
As someone pointed out to me today, the Stuxnet worm that was targeted against the Iranian nuclear centrifuges worked across multiple computers and could use USB sticks as infection vectors. The target computers were air-gapped, and infecting them required transmission via USB for at least some of the jumps.
It is not beyond the realms of possibility that if the NSA were really targeting an individual, they could send a targeted piece of software that could migrate via USB, look for the desired information, copy it to the USB drive, and then send it home when the drive gets plugged back into a computer with internet connectivity.
It is less likely that they could do that on a mass scale, but I guess it's possible.
-
Matthew Poole, in reply to
SELinux is open source. That means that many eyes outside the NSA will have inspected it to see if there are any holes.
Particularly since the NSA is not widely trusted by the Linux development community. That SELinux was an NSA project is just a guarantee that it has been vetted more thoroughly than most other parts of the kernel.
-
Matthew Poole, in reply to
The classic way of social-engineering a worm into a secured environment is to drop an infected USB key or few in the parking lot of the target organisation. Infect a computer that's inside the border protections and then let the worm do its thing. Getting information out is a lot harder, particularly if it's a classified-information environment that has been set up properly (no flash drives, etc), but for a worm that's meant to be a one-way destructive infection it's much easier.
And if you're truly paranoid about how you get information from one system to another, you use write-once optical media, because it enforces the air gap by never allowing uncontrolled writes back. You also disable auto-run, which shuts down most removable-media infections - and optical media actually respects a disabled auto-run far more reliably than flash drives do.
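The auto-run part, on Windows at least, comes down to one well-known registry value. A minimal sketch, assuming admin rights (the key path and value name are the standard ones; the script itself is just an illustration):

```python
import winreg

# 0xFF in NoDriveTypeAutoRun disables AutoRun for every drive type.
key_path = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```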
-
Ian Dalziel, in reply to
Chipping away...
the Stuxnet worm...
...and Flame and Duqu and other spyware and malware...
...one wonders just how far those have spread since being released into the wild by their indolent and arrogant creators?
This topic is closed.