
Comments

Clive Robinson

Gunnar,

The problem with policy is that it is a set of incomplete, guessed-at rules running at multiple insecure levels, relying on correct implementation on unrelated and probably disparate systems.

I was aware of some of the issues to do with this many years ago having been involved with the design of secure communications equipment.

Back in the early 90s, when teaching students, I had to point out to them one of the pitfalls of security, which is putting human understanding onto computers that have no understanding.

At the time I used to tell them,

A Computer has

R - rules
I - information
P - processes

A human understands,

I - integrity
C - communications
E - entities

That is, the understanding of human-to-human interaction does not map conveniently, or even inconveniently, onto the basic principles of computer operating systems; it simply does not fit. And the rules (the policy of today) would have to be all-encompassing for all situations.

If you remember back then, the standard security model was based on mainframes and terminals, and PCs did not have meaningful security (16-bit co-operative multitasking Windows was sitting on 16-bit single-user DOS, on top of a BIOS based on a late-1970s 8-bit OS called CP/M, which had no notion of security).

Security then developed down two separate paths.

1, Apps + data on server.
2, Apps on PC data on server.

And "nair the twain met"

From this, various models came forth, such as thin client, middleware and, more recently, Web 2.0.

However, the two security models did not change, and even today you can clearly see the desktop-security and server-security mindsets in the minds of application developers.

It is a Castle mentality at the PC, where it is assumed that the only processes running are benign and under the control of a single entity. Thus you defend only against external attackers, not those within.

And a Prison mentality at the server, where it is assumed that each process runs in a jail where it is held, and its access to resources, data and communications is strictly controlled by the OS.

Thus each system is independent of the other, and each OS's view of the processes running and their access to resources is radically different, as are the mindsets of not just the application developers but the OS developers as well (for the more common paid-for OSs on i386 and later hardware).

One problem with server-side application development was that input checking/validation and error handling was invariably moved as far to the left (input side) as possible, and exception handling was minimal (the data in the files is always valid and available ;). The reason for this was simplicity of design and an assumption that the whole application stopped if there was an exception (i.e. it either cored out or gave an error message prior to terminating) and cleanup was automatic...
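To make that concrete, here is a minimal sketch in Python (entirely my own illustration, the record format and names are made up) of the old style: everything checked once at the left edge, a hard stop on anything unexpected, and blind trust downstream.

    import sys

    def read_record(line: str) -> dict:
        # All checking happens here, at the left edge of the program.
        fields = line.rstrip("\n").split(",")
        if len(fields) != 3:
            sys.exit(f"bad record: {line!r}")    # the whole application stops
        name, qty, price = fields
        if not qty.isdigit():
            sys.exit(f"bad quantity: {qty!r}")   # ditto: no recovery attempted
        # An unparseable price below simply "cores out" with a traceback.
        return {"name": name, "qty": int(qty), "price": float(price)}

    def total(records) -> float:
        # Downstream code does no checking: "the data is always valid" ;)
        return sum(r["qty"] * r["price"] for r in records)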

The advent of the web-based application saw a sea change in the way things worked. Applications were quickly decoupled into "front ends" and "back ends". Unfortunately no real attempt was made to change the design process at the time, and often the server side was an existing app that just had the user front end ripped out and passed on to "web app" developers.

You ended up with the security and error checking in the front end (if at all) and the business logic in the back end. All of a sudden the prison mentality was in conflict with the castle mentality, with the result that things became horribly insecure, due in the main to little or no input error checking in the back end and no exception handling in the front end, all separated by an unreliable and often insecure communications system.
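As a sketch of the missing control (the handler, field names and limits here are all hypothetical), the back end has to repeat the input checking itself, whatever the front end claims to have already done:

    import re

    ACCOUNT = re.compile(r"^[0-9]{8}$")   # made-up account format

    def transfer(params: dict) -> str:
        # Re-validate everything server side; trust nothing from the front end.
        account = params.get("account", "")
        if not ACCOUNT.match(account):
            return "error: bad account"
        try:
            pennies = int(params.get("amount", ""))
        except ValueError:
            return "error: bad amount"
        if not 0 < pennies <= 1_000_000:
            return "error: amount out of range"
        # Business logic now only ever sees data this end has checked itself.
        return f"ok: {pennies} pennies to account {account}"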

Worse, the lack of basic controls at either end opened up the assumptions of the two mindsets like a hand grenade in an oil drum.

Part of the ad hoc solution was to put in place middleware servers that took care of input errors and validation for the back end, and exception handling for the front end, with more secure communication to the front end. However this has done nothing to solve the issues to do with the differing mindsets, and it also offers more vectors by which the overall system can be assaulted (especially with web servers using shared memory and privilege levels, and allocating access to other resources to multiple clients that may have different privileges under different policies).

Worse still, at the client end the chosen application is a web browser that likewise allows multiple applications to run in the same memory space, and relies not on the OS to mediate access to resources but on the browser, which coincidentally runs at the same privilege level as the applications (often, on clients, at an administrative privilege level).

Thus the prison mentality's application segregation and data control has been effectively obviated by the castle mentality at the client. And the castle mentality's security at the client has been obviated by the prison mentality at the server, mish-mashed by the middleware.

It is this argument that has supposedly given rise to Google's Chrome browser, Chrome OS, etc. The attempt is to extend the jail mentality into the client.

However, even though you appear to have security parity, and thus peer-to-peer security, you do not.

There is the issue of extending conflicting policies onto the client PC: data can leak from one secure application to another simply through trying to achieve "workability" between conflicting policies sent down from separate servers (that is, policies will sink to the point of acceptable business use, not security).

Then, on top of this, there is the issue of covert-channel communication. A savvy developer of a server or middleware application will be able to tell a lot about what is going on on a client system via various techniques using, amongst other things, standard and expected OS/comms calls (i.e. an "enumeration" process).

For instance, a DNS request made from the client at the behest of the middleware/server can reveal if the client has made a request to a given server within the local cache timeout, simply by the response time (likewise via its local DNS server etc.). Thus the application developer can make a reasonable guess as to what other servers are being accessed by the client (and there are hundreds if not thousands of other tricks that can be used to get information, including encryption keys).
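A toy version of that DNS timing probe shows how little it takes (the hostname and threshold here are arbitrary, and a real probe would have to be far more careful about network noise):

    import socket
    import time

    def resolve_time(hostname: str) -> float:
        start = time.perf_counter()
        try:
            socket.gethostbyname(hostname)
        except socket.gaierror:
            pass                                 # even failures take time
        return time.perf_counter() - start

    first = resolve_time("example.com")          # cold, unless cached nearby
    second = resolve_time("example.com")         # warm: almost certainly cached
    print(f"first {first * 1000:.1f} ms, second {second * 1000:.1f} ms")
    if first < 2 * second + 0.002:               # crude threshold: cache speed
        print("first lookup was fast, so this name was resolved recently")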

If the client does not have an appropriate policy and OS support to stop this, then there is no way the policy of another server can stop it happening, except by exclusivity of client use, which would break most business use models.

This issue then becomes two-way, in that with a cloud-based service it may be in the cloud service owner's interests to "enumerate" clients to see what other services are in use.

To prevent this sort of covert-channel attack you need to "clock the inputs and clock the outputs" to remove timing information. This security mechanism, however, only works when the resources are at best only lightly loaded.
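As a sketch of what "clock the outputs" means in practice (the tick size is an arbitrary choice of mine): hold every reply back to the next fixed clock edge, so the caller sees quantised response times rather than the real work time. Note this only reduces the channel's resolution, and hence its bandwidth; it does not remove it entirely.

    import time

    TICK = 0.1   # seconds per clock edge: an arbitrary choice for the sketch

    def clocked(handler, request):
        start = time.monotonic()
        result = handler(request)
        elapsed = time.monotonic() - start
        # Hold the reply back to the next whole tick, masking the work time.
        time.sleep((int(elapsed / TICK) + 1) * TICK - elapsed)
        return result

Which is exactly why it fights utilisation: the padding is dead time, and the more heavily you load the resource the less slack there is to pad with.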

Thus "efficient systems" with high utilisation (load) will always have covert channels that can be exploited, and "Cloud computing" is only viable if it's resources are used efficiently...

All of which means that use of the cloud will always be insecure against malware unless exclusivity of resources is employed, which breaks basic business models across the board...

Thus the question that should be asked is not "how do I make the cloud secure?" (you cannot, except by exclusivity) but "how do I limit the bandwidth of any covert channels, or limit the information that can leak?"

That is, I cannot reliably lock up the "container" (the cloud), so I have to lock the data for a time period until its value has minimised.
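One way to read "lock the data for a time period" (a sketch only, using the third-party Python "cryptography" package, and with an arbitrary lifetime): keep the key at home, put only ciphertext in the cloud, and destroy the key once the data's value has decayed.

    import time
    from cryptography.fernet import Fernet   # third-party "cryptography" package

    key = Fernet.generate_key()               # held locally, never sent out
    ciphertext = Fernet(key).encrypt(b"quarterly figures")   # cloud-safe blob

    expires = time.time() + 7 * 24 * 3600     # a week, say: the value lifetime

    # ... later, once the deadline passes, discard the key; the copy left in
    # the cloud is then just noise to anyone who gets hold of it.
    if time.time() >= expires:
        key = None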

Which brings you around to the question of impact minimisation by other methods.

This is where the notion of parallel processing comes to mind. Data can be split up in many ways across many systems, and the use of even simple encryption can help (obfuscation) if adequate data normalisation has been used.
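One simple way to do the splitting (my illustration, not a prescription) is a two-way XOR split, so that neither share means anything on its own:

    import secrets

    def split(data: bytes) -> tuple[bytes, bytes]:
        pad = secrets.token_bytes(len(data))    # share A: pure random noise
        return pad, bytes(a ^ b for a, b in zip(pad, data))   # share B

    def join(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    share_a, share_b = split(b"customer,balance,limit")
    # put share_a with provider 1 and share_b with provider 2 (hypothetical)
    assert join(share_a, share_b) == b"customer,balance,limit"

Each share on its own is uniformly random; it takes both providers' copies to recover anything, which also gives you the at-least-two-providers property mentioned below.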

A little forethought on such issues would make the task of impact minimisation considerably easier, and should be part of the data preparation phase of any "cloud project", to make it workable on at least two separate service providers' systems.

As has often been noted, "For a ha'porth of tar the ship was lost", or "P155 Poor Planning leads to P155 Poor Performance".
