1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.


Security > 140 Conversation with Gerry Gebel Part 3: A Visit from Madame TOCTOU

Here is part 3 of a conversation (part 1, part 2) with Gerry Gebel and his colleagues David Brossard and Pablo Giambiagi. We discuss fine-grained authorization, XACML patterns, and what an enterprise needs to think about to build these skill sets in house to make the most of these standards.

GP: In our last conversation you mentioned "As with other access policies, administrators will work with application and security experts to construct XACML policies. Administrators learn this skill fairly quickly, but they do need some training and guidance at first."

In your experience who authors these policies today? And where should this responsibility sit in the organization going forward? It seems there is a mix of application domain and security experience required, plus there may be some need to understand business processes and architecture. Is there a new organizational role, Security Policy manager emerging?

GG: I like the sound of Security Policy Manager, and it is a role that could appear over time. For now, we see security administrators working with a business analyst and/or application developer to construct the policies. This amounts to determining what resources/actions need to be protected, translating the business policy (doctors can only view records of patients they have a care relation with), determining what attributes are needed, sourcing the necessary attributes, etc.

GP: In looking at Infosec as a whole, it seems to me that we have two working mechanisms - Access Control and Crypto. Everything else is integration. What guideposts should people use to identify and locate the integration points for their authorization logic? Do you focus on the resource object, the user, or the Use Case context? Or is it a mix?

GG: Policies can focus on the resource, subject, action or environmental attributes - that is the beauty of an attribute based access control language like XACML. That also means there are many ways to model policies. Here are some general guidelines:

- Make sure you start with the plain old English-language business rules and policies

- Take a "divide and conquer" approach to building the policy structure. This could mean writing policies for different components of the application that can be viewed as logically separate. What you're also doing here is taking into account the natural activity flows when people use an application or the types of requests that will come into the PDP. Upon analysis of the scenarios, you can place infrequently used policies in branches where they won't impact performance of more frequently used policies. 

- Evaluate frequently used policies first - seems intuitive but may not be apparent at first. Therefore, you need to continually evaluate how the system is being used to see if modifications to the policy structure are needed. As mentioned in the previous point, this will allow you to identify policies that are infrequently evaluated so you can ensure they are not in the path of frequently evaluated policies. 

- Consider the sources of the attributes that you will utilize in policies. Are the attributes readily available, or are they spread across multiple repositories? If the latter, this is a great place to use a virtual directory.
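The virtual-directory idea in the last guideline can be sketched as a single lookup that merges attributes from several stores. This is a minimal illustration, not any product's API; the store names and attributes are hypothetical.

```python
# Hypothetical sketch: merging subject attributes from multiple repositories
# behind a single PIP-style lookup, the way a virtual directory would.

HR_DB = {"alice": {"department": "cardiology"}}        # assumed HR store
LDAP = {"alice": {"role": ["doctor"]}}                 # assumed directory
BADGE_SYSTEM = {"alice": {"on_site": True}}            # assumed badge system

def resolve_attributes(subject_id):
    """Merge attributes for one subject from all configured stores."""
    merged = {}
    for store in (HR_DB, LDAP, BADGE_SYSTEM):
        merged.update(store.get(subject_id, {}))
    return merged

print(resolve_attributes("alice"))
# {'department': 'cardiology', 'role': ['doctor'], 'on_site': True}
```

The point of the pattern is that the PDP (via its PIP) sees one coherent attribute set, regardless of how many repositories sit behind it.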

It is only after analyzing the scenario that you can say whether policies will be targeted to users, resources or some mix of the two. This is an area where we do provide guidance to customers at the start of a deployment. However, we do find that customers are able to manage this aspect of the deployment pretty quickly. Another point to keep in mind is that access policies don't change that frequently once an application is on-boarded into the system. What happens on a daily basis is the administration of attributes on the individual users that will be accessing the protected applications. 

GP: The Time of Check to Time of Use (TOCTOU) problem has been around for as long as distributed computing. The time between when tickets, tokens, assertions, claims, and/or cookies are created and the time(s) when they are used by the resource's access control layer gives an attacker a number of places to hide. Choosing the place to deal with this problem and crafting policies is an important engineering decision -- what new options and patterns does XACML bring to the table to help architects and developers deal with this problem?

You talked previously about the XACML Anywhere Architecture, where a callback from a Cloud Provider queries the PDP; this would enable the Cloud Provider to get the freshest attributes at runtime while still allowing the Cloud Consumer to benefit from the Cloud Provider's deployment. The callback idea has appeal for a wide variety of use cases, but do people need to update their attribute schemas with any additional data to mark the freshness of the attributes so the PEP/PDP can decide when or if to fire off these requests? Does the backend PDP have any responsibility to mark attributes? What is the emerging consensus on core patterns here, if any?

GG: You are right to point out that having the ability to call an externalized authorization service provides a mechanism for the application to have the most current attributes and access decisions. However, there are a couple of places in the XACML architecture to consider as it relates to TOCTOU - freshness of attributes and caching of decisions.

Attributes: Fresh or Stale

First we'll look at the freshness of attributes that you describe, because it is the attribute values that will be processed in the policy evaluation engine (PDP). The XACML 3.0 specification is clear about what an attribute is. In terms of XML schema, it is defined as follows:

<xs:element name="AttributeDesignator" type="xacml:AttributeDesignatorType" substitutionGroup="xacml:Expression"/>

<xs:complexType name="AttributeDesignatorType">
  <xs:complexContent>
    <xs:extension base="xacml:ExpressionType">
      <xs:attribute name="Category" type="xs:anyURI" use="required"/>
      <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
      <xs:attribute name="DataType" type="xs:anyURI" use="required"/>
      <xs:attribute name="Issuer" type="xs:string" use="optional"/>
      <xs:attribute name="MustBePresent" type="xs:boolean" use="required"/>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

There is no room for a timestamp or a freshness attribute in an attribute designator used in a XACML policy (set), which would indicate under which temporal terms a policy is relevant based on the freshness of incoming attributes.

Accordingly, the incoming XACML request is made up of attributes and attribute values, defined in the schema as follows:

<xs:element name="Attribute" type="xacml:AttributeType"/>

<xs:complexType name="AttributeType">
  <xs:sequence>
    <xs:element ref="xacml:AttributeValue" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
  <xs:attribute name="Issuer" type="xs:string" use="optional"/>
  <xs:attribute name="IncludeInResult" type="xs:boolean" use="required"/>
</xs:complexType>

Again, there is no use or mention of a freshness or time attribute. This is consistent with the fact that the PDP is a stateless component in the architecture: unless it is told, it cannot know the freshness, the time of creation, or the expiry date of an attribute value. The PDP does know the time at which it receives a request from the PEP and the time at which it invokes the PIP(s), and it can assume a date of retrieval, but even then the value could be skewed by caching mechanisms. And it will not use those timestamps for access control unless, of course, the policy author explicitly asks it to.

Let’s imagine the case of a role attribute. Let’s imagine we want to write a policy which says: grant access to data if

• the user is a manager and 

• if the role attribute is no more than 5 minutes old (5 minutes being the maximum freshness time)

This uses 2 attributes: role and freshness. The incoming request says “joe access data” which is not enough for the PDP to reach a conclusion. The PDP queries an LDAP via a PIP connector to retrieve the roles for Joe: “give me Joe’s roles”. Joe may have multiple roles; some of which may have been cached in the PIP attribute cache. In addition, each role has a freshness value (which could be the time at which the LDAP was queried for the value or the number of minutes since it was last queried for the given attribute). In that case, we may have different freshness values. Which one should be used? This shows there is a ternary relationship between the user, the role, and the freshness of the role. The simplest way for XACML to handle this today is to encode the freshness and the role value into a single attribute value e.g. manager::0h04mn37s. In that case the policy must match the attribute value using regular expressions (or XPath) to extract the role value on one hand and the freshness value on the other.
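The encoding-and-extraction step described above can be sketched as follows. The value format ("manager::0h04mn37s") comes from the example in the text; the parsing function, its name, and the 5-minute threshold are illustrative, standing in for the regular-expression match a XACML policy would perform on the combined attribute value.

```python
import re

# Hypothetical sketch: a role with its freshness encoded into a single
# attribute value, e.g. "manager::0h04mn37s", split apart with a regex
# the way a policy would via a string/regexp function.
PATTERN = re.compile(r"^(?P<role>[^:]+)::(?P<h>\d+)h(?P<m>\d+)mn(?P<s>\d+)s$")

def parse_role_attribute(value, max_age_seconds=300):
    """Return (role, fresh_enough) for an encoded role::freshness value."""
    m = PATTERN.match(value)
    if m is None:
        raise ValueError("unrecognized attribute encoding: %r" % value)
    age = int(m.group("h")) * 3600 + int(m.group("m")) * 60 + int(m.group("s"))
    return m.group("role"), age <= max_age_seconds

print(parse_role_attribute("manager::0h04mn37s"))   # ('manager', True)
print(parse_role_attribute("manager::1h00mn00s"))   # ('manager', False)
```

This also makes the drawback visible: the policy now depends on a string encoding convention rather than on first-class freshness metadata.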

Is TOCTOU masquerading as a governance issue?

Another aspect of attribute freshness is the administrative or governance processes behind the attributes. If attributes are not fresh, isn't this a governance failure? For example, how frequently does your provisioning process make updates? Once a day, once a week, every hour - or is it event driven, updating continuously? Your answer will provide additional input on how to manage the consumption of attributes by authorization systems. So this problem extends beyond the Time of Check to the Time of Administration.

Decision Cache
Next up, PEP decision caching is another area to examine to stay clear of TOCTOU issues. For performance reasons, PEPs can cache decisions for reuse so they don't have to call the PDP for every access request. Here, your TTL setting is the window of opportunity when the PEP could use an invalid decision if a privilege-granting attribute has changed. 
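The TTL-as-TOCTOU-window point above can be made concrete with a small sketch. The cache class, its method names, and the fixed timestamps are hypothetical; the idea is only that a cached decision outlives the attribute change that would have invalidated it, for at most TTL seconds.

```python
import time

# Hypothetical sketch of PEP-side decision caching: each cached decision
# carries a TTL, and that TTL is exactly the TOCTOU window in which a
# revoked privilege could still be honored.
class DecisionCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._cache = {}   # (subject, action, resource) -> (decision, expiry)

    def get(self, subject, action, resource, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get((subject, action, resource))
        if entry and entry[1] > now:
            return entry[0]        # still within the TTL window
        return None                # expired or absent: PEP must call the PDP

    def put(self, subject, action, resource, decision, now=None):
        now = time.time() if now is None else now
        self._cache[(subject, action, resource)] = (decision, now + self.ttl)

cache = DecisionCache(ttl_seconds=60)
cache.put("joe", "view", "record-42", "Permit", now=1000.0)
print(cache.get("joe", "view", "record-42", now=1030.0))  # Permit (cached)
print(cache.get("joe", "view", "record-42", now=1061.0))  # None (window closed)
```

Tuning `ttl_seconds` is the engineering trade: longer TTLs mean fewer PDP round trips but a wider window for stale decisions.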

In short, XACML is an attribute-based language and helps you express restrictions and conditions, some of which can involve the freshness of attributes. It is easier in that sense than previous access control models, which did not allow for that. However, there has not yet been a standardized effort around the overall freshness challenge. And it is no longer a problem constrained to policy modelling: attribute retrieval strategies, PEP and PIP implementations, caching, and performance all impact freshness (or the impression of freshness).

GP: Thanks very much to Gerry Gebel, David Brossard and Pablo Giambiagi for the conversation on these important topics.

March 17, 2011 in Security, Security > 140, XACML

Security > 140: Conversation with Gerry Gebel Part 2

Here is part 2 of a conversation with Gerry Gebel discussing XACML - specifically, use cases in its sweet spot, security architecture opportunities, getting people to grok it, and the processes needed to optimize and integrate it.

GP: Gerry, last time we chatted we talked about the overall architecture of XACML and some of the needs it fills for security architects in 2011. For this thread I wanted to talk about concrete use cases. The one use case that seems to come up a lot when showing the value of XACML is healthcare; these are quite fascinating use cases that include fine-grained authorization and privacy concerns, but this is not something that applies to all systems.

I wanted to get your thoughts on deployments where XACML use cases are for core access control and integrity. In my mind, XACML fills a need in both Mobile and Cloud use cases.

There are so many different Cloud Use Cases and deployments, but one area where XACML seems to solve a clear and pressing need is for fine grained authorization in PaaS and SaaS, where the following conditions apply. 

* The Cloud Consumer would like to minimize use and control sharing of certain highly sensitive attributes, but would still like to shift the major parts of the software platform to PaaS/SaaS

* The Cloud Consumer (company) needs to store some sensitive data and functionality in PaaS/SaaS

* The user accounts are managed by the Cloud Consumer, and the PaaS/SaaS relying party consumes SAML tokens (or other formats) at runtime

To me this looks like a core access control use case for XACML in the Cloud, where the run time access control resolves the policy dynamically based on the contents of the SAML token. Are there examples of this being done today?

GG: The scenario you are describing is covered by what I call the "anywhere architecture" that XACML addresses - see attached diagram. Access to cloud hosted data and resources can be controlled in a number of deployment configuration options. For example, you can have application specific PEPs at or near the resource, or utilize an XML gateway for the PEP. Policy servers (PDPs) can be installed on premises or hosted at the cloud provider. Similarly, policies can be managed where most appropriate and distributed to PDPs for run time enforcement. 

When combined with a federation approach to authentication, you get the best of both worlds. That is, users can authenticate to their home domain regardless of where they are and access data based on policy - wherever that data is hosted.

[Diagram: Anywhere Architecture]

We have been in discussions with some organizations that are investigating cloud hosting options for applications and data, but this is mostly in exploratory phases. What is reassuring to them is the fact that the XACML architecture does in fact cover such scenarios and provides the flexibility they are looking for.

While not a cloud-hosted system, the Swedish National Health Care system does use a combination of SAML and XACML to control access to patient data. So your premise that healthcare is an excellent use case for XACML controls is certainly true. However the requirement to restrict access to specific data elements holds true across many other industry segments.

GP: In the Mobile case, we know that devices are occasionally connected, so some session data will be cached locally on the device - for when you go into an elevator, drive through a tunnel, or the flight attendant tells you to turn off your phone before takeoff.

We also know that mobile device usability is pretty good, but we still want to minimize re-authentication and extended authentication dances for the user.

So both of these mobile trends argue for delivering some content to the device that requires authorization out of band of the initial data flow - the ability to store an attribute cache and attach the policy to use if needed. This strikes me as right in the wheelhouse of XACML; have you seen these types of deployments?

GG: We are starting to see some interesting mobile use cases emerge, but it's still a bit early to make many concrete statements on how an XACML model could accommodate them. That said, here are some preliminary thoughts:

1. If a policy decision is rendered while the mobile device is connected, that decision could be cached locally for some period of time and re-used for a subsequent access request. This technique is useful for performance and efficiency when online, but also quite effective when you are offline - provided that the TTL limit is sufficient. 

2. For full offline processing of authorization, you have to consider how much of the authZ apparatus can be tolerated on the mobile device. This includes the attributes and policies you mentioned, but also a decision engine of some sort. What might be more efficient is to preload decisions in some way. That is, if you possess a token or claim that indicates you are the owner of a health record - then you can view it.
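The "preloaded decisions" idea in point 2 can be sketched very simply. The claim names, record identifiers, and lookup structure below are all hypothetical; the point is that an offline check reduces to a claim lookup rather than running a policy engine on the device.

```python
# Hypothetical sketch of preloaded offline decisions: the device holds a
# small set of claims issued while connected, and offline access checks
# reduce to a claim lookup rather than full policy evaluation.

offline_claims = {
    ("alice", "health-record-17"): "owner",   # claim issued while online
}

def can_view_offline(subject, record_id):
    """Permit offline viewing only with a preloaded 'owner' claim."""
    return offline_claims.get((subject, record_id)) == "owner"

print(can_view_offline("alice", "health-record-17"))  # True
print(can_view_offline("bob", "health-record-17"))    # False
```

The trade-off mirrors decision caching: the claims are only as current as the last time the device was connected.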

GP: I used to struggle with how to convey XACML's value proposition to people, so I would just show them the mechanics and hope they could see it. But now I say XACML is useful in two scenarios: one, where your data is on the move, and two, where you have more fine-grained access decisions to make. I am sure there are other scenarios, but those seem to be two quite important issues that many companies have to deal with today and that XACML can play a role in addressing.

As to the data on the move, business has already decided that this is happening, whether it's Cloud, Mobile or just good old-fashioned Web apps. It's our job in security to catch up to where the business is going. The idea that the data can be in motion, and crucially that it can carry its security policy with it, is a very useful property for a security architect to have at their disposal. You mentioned in response to the Cloud question that when using XACML plus a "federation approach to authentication, you get the best of both worlds." This approach seems not only to be the best of both worlds, but one of the only ways to deliver on the flexibility of use case scenarios that the business is driving and the distributed technical architecture that runs those deployments.

Is there an end to end model (or models) emerging here? 

GG: When you say end-to-end model, it makes me think of an architecture that is sort of tightly coupled. What has emerged over the last few years is the ability to piece together application and infrastructure components based on a number of important standards, such as SAML, XACML, OAuth, WS-Security, WS-Trust, etc. The existence of standards makes it more feasible to build implementations that are flexible enough to support the kind of mobile models you are describing. 

Data on the move is one issue, but "access on the move" is another perspective that I think should be treated slightly differently. Certainly when a data element moves from protected storage to some other location, you may want to attach policy that controls its usage downstream. Access on the move is at least as compelling, because in this case you have power users demanding the ability to perform their jobs from any computing device regardless of where they are located. Consequently, it can be difficult to build an access model that applies the same policy rigor across all these channels and devices.
The challenge for security architects is to develop solutions that accommodate mobile data and mobile workforces, instead of just saying 'no'.

GP: As a security architect, I take it as a given that authentication happens in one place and authorization happens in another. As a general rule of thumb, I want my authentication process to happen as "close" to the user/subject as possible, where the system knows the most about the user; and I want my authorization processes to fire as "close" to the resource as possible, where the authorization process has the most specific knowledge of the authorization request. XACML helps me resolve the back half of that problem.

It strikes me that the combination of SAML, to ferry the authentication/attribute assertions, with the Anywhere architecture (XACML) on the resource side closes some gaps that most security architectures have today. Lots of systems have strong authentication, and some systems have good fine-grained authorization, but almost all systems have systemic breakdowns in the middleware plumbing between those two - and those two domains, authentication and authorization, have to work together.

Are you seeing the drivers for XACML being around the resource domain or enabling an end to end model?

GG: There are multiple drivers for externalized authorization based on XACML, but let's focus on a couple. First, there is the audit and compliance angle (I recognize that the industry has beaten this issue to a pulp in the last couple of years...). As someone told me recently, "if we can implement externalized authorization, then we will actually have a chance to answer auditors' questions when they ask about access to these applications." This is still a big issue for organizations, particularly those that are deploying more sophisticated applications - they must have more granular access to the application's functions, and they prefer to do it in a standards-based way.

Second, an organization's most precious data is exactly what outsiders like partners and customers want access to. A policy language like XACML allows you to share exactly the right data, and nothing more, with the right users. You could say this is resource domain focused. Standards like SAML play a role because an organization doesn't want to, and shouldn't, manage credentials for most outside users.

GP: Role-based access control has done well in terms of adoption for many reasons, but I think one important reason has little to do with technology: a Role is a real thing in an organization. It's easy for people to grasp that a DBA or a nurse or a stock trader should have a different set of privileges, and that we can aggregate those privileges into Roles.

In ABAC, we have a far more powerful, generic model that can build much more robust authorization frameworks than RBAC. The generic part of ABAC is what gives it its architectural power, but it makes ABAC capabilities much harder to communicate because you don't anchor the description on a fixed concept like a Role. It seems that patterns could fill this gap, but many of the patterns we have seen so far are things like delegation, which is important but refers to a domain-specific authorization concern. Are there patterns that you are seeing emerge that help map some ABAC capabilities to organizational Parties, Places and Things?

GG: There has been a similar discussion on LinkedIn (http://linkd.in/ekLLqo) that has more than 180 comments, so far. The RBAC model has a huge head start in terms of mindshare - it has been promoted for something like 30 years. As you point out, a role is something that people recognize, and it encapsulates at least a high-level notion of what privileges should be for a person assigned to that role. One of the challenges with roles is that they are often not granular enough: doctors can view patient information, but it should only be for the patients assigned to them. Nurses can update patient status, but it is only permitted while they are on duty, from the nurse's station, and if they badged into the clinic. These kinds of rules can be difficult to represent in a role or in an access policy. With an ABAC language like XACML, it is very easy to represent such organizational or security policies. While it is not as "human readable" as a role, XACML policy is a much more comprehensive way to represent access policy, where roles are one of the attributes that can be utilized.
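The nurse rule above is a good illustration of how ABAC combines attributes that RBAC cannot express with a role alone. A minimal sketch (attribute names and the predicate form are hypothetical, standing in for the conditions a XACML rule would test):

```python
# Hypothetical sketch: the nurse rule ("update patient status only while
# on duty, from the nurse's station, after badging in") as a predicate
# over subject attributes, the way an ABAC policy combines conditions.

def permit_update_status(subject):
    return (
        "nurse" in subject.get("roles", [])        # the role is just one attribute
        and subject.get("on_duty", False)          # environmental condition
        and subject.get("location") == "nurses-station"
        and subject.get("badged_in", False)
    )

nurse = {"roles": ["nurse"], "on_duty": True,
         "location": "nurses-station", "badged_in": True}
print(permit_update_status(nurse))                        # True
print(permit_update_status({**nurse, "on_duty": False}))  # False
```

Note that the role appears as one attribute among several, which is exactly the relationship between RBAC and ABAC described above.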

GP: As much as people think about computing technology, tools and protocols, real world deployment is all about people and process. How do you get organizations to grok XACML? What kinds of things do they need to do in the development and operational processes to get benefits from XACML?

GG: First, they need to buy into the notion that externalized authorization is the right architectural approach for building and deploying applications. Fortunately, the number of architects thinking this way is growing, and they are causing software makers to change course. Once you embrace that notion, here are three important things to consider:


1. How will applications interface with the XACML service? In essence, this means determining how the application will construct an XACML request and process the XACML response for access decisions. Each application style is a little different in where the necessary attributes (subject, action, resource, environment) are stored, so the PEP may need to be adjusted. In some scenarios, an XML gateway can be a low-footprint way of dealing with web services applications.

This also implies that developers and application owners need to buy into the externalized authorization model. Security pros will recognize this point as critical - think of how many times you have attempted to sell developers and app owners on a new idea. Your mileage may vary.
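The PEP's job in point 1 - assembling subject, action, and resource attributes into a XACML request - can be sketched as follows. The attribute and category identifiers follow XACML conventions; the function name and the surrounding application plumbing are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of a PEP constructing a minimal XACML 3.0 request
# with subject, action, and resource attributes.
XACML_NS = "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"

def build_request(subject_id, action_id, resource_id):
    req = ET.Element("Request", xmlns=XACML_NS)
    for category, attr_id, value in [
        ("urn:oasis:names:tc:xacml:1.0:subject-category:access-subject",
         "urn:oasis:names:tc:xacml:1.0:subject:subject-id", subject_id),
        ("urn:oasis:names:tc:xacml:3.0:attribute-category:action",
         "urn:oasis:names:tc:xacml:1.0:action:action-id", action_id),
        ("urn:oasis:names:tc:xacml:3.0:attribute-category:resource",
         "urn:oasis:names:tc:xacml:1.0:resource:resource-id", resource_id),
    ]:
        cat = ET.SubElement(req, "Attributes", Category=category)
        attr = ET.SubElement(cat, "Attribute", AttributeId=attr_id,
                             IncludeInResult="false")
        val = ET.SubElement(attr, "AttributeValue",
                            DataType="http://www.w3.org/2001/XMLSchema#string")
        val.text = value
    return ET.tostring(req, encoding="unicode")

xml_request = build_request("alice", "view", "patient-record-42")
print("alice" in xml_request)  # True
```

In practice the PEP would send this request to the PDP over whatever transport the deployment uses and then enforce the decision in the response.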

2. Who will author and manage XACML policies? As with other access policies, administrators will work with application and security experts to construct XACML policies. Administrators learn this skill fairly quickly, but they do need some training and guidance at first.

3. Where are the attributes? Since XACML is an attribute based language, organizations may need to do some work to improve the care and maintenance of privilege-granting attributes. For example, if a request comes into the PDP that Alice, a doctor, wants to update a patient record, the PDP may need to find out more information on Alice. Is Alice affiliated with the hospital where the patient is located? Is Alice the patient's primary physician assigned to the case? This information could be stored in a directory, database, patient record system, etc. Further, if these attributes are used to grant access, they must be properly maintained to ensure accuracy.

 

(Note: stay tuned for part 3 of this conversation, A Visit from Madame TOCTOU)

January 27, 2011 in Security, Security > 140, XACML

Security > 140 Char Conversation with Gerry Gebel on Authorization

What follows is a conversation I had with Gerry Gebel on authorization and XACML. Gerry Gebel was formerly with Burton Group and is now President of Axiomatics (America). Axiomatics focuses on authorization and the XACML standard. These topics are important and do not lend themselves to sound-bite chunks, so I asked Gerry a number of questions on the space and found his answers on authorization technology and trends quite interesting.

GP: Authentication gets so much attention in security; for example, there are dozens of authentication types supported by SAML. This is due to the many, many attempts people have made to improve authentication over the years, but authentication is really a guess. There are ways to make better or worse guesses, but once that guess is bound to a principal, the game changes. Authorization is mainly about solving puzzles (not guessing), and it seems to me that infosec as a whole should spend more time getting its "puzzle logic" implemented right, ensuring that authorization rules have the necessary coverage, depth and portability. Why is it that people have been so easily seduced by authentication, and what can we do to get people to focus less on this quixotic pursuit and more on solvable problems like authorization?

GG: The focus on authentication, in many ways, is justifiable because much authorization was embedded within the authentication process. If you can authenticate to the application, then you have access to all the information - in a general sense. This approach is manifested in many legacy applications, early security perimeter strategies, and first generation portals. In today's environment, authorization approaches must be much more discriminating due to regulatory, privacy, or business requirements. Further, the number of applications or resources that are not shared with an outside party has essentially been reduced to zero - the most valuable assets are the ones of most interest to your partners and customers. This transition from "need to know" to "need to share" is shifting the focus to authorization, and I believe we are seeing the early signs of enterprises placing more focus on it. Ultimately, authentication remains important because we still need to know who is accessing the information - not necessarily their full identity, but enough privilege-granting attributes about them.

 

GP: What is driving this shifting focus towards authorization? How much is driven by the management of sharing problem present in today's applications like Web applications, Web service, Mobile and Cloud, where architectures are so distributed and occasionally connected that they are forced to authenticate in space and authorize in another? And how much is driven by the applications themselves becoming more sophisticated with more functionality, data and layers that require more fine grained authorization? Can general purpose frameworks like XACML help in both the technology architecture and the language expressiveness or are there different patterns required?

GG: There are many forces at work that are changing the perspectives on how identity management functionality and services should be implemented. First, it is quite logical to have authentication taking place completely disconnected from the application or data resource - think of the federated model. Second, applications are much more sophisticated than what was being developed even a few years ago. The level of complexity and granularity must be met by an equally sophisticated and comprehensive authorization scheme. Finally, XACML is well suited to meet the complex requirements we are referring to. XACML is a mature standard (work on it began in Feb 2000) that is comprised of a reference architecture model, policy language and request/response protocol. The XACML architecture is well suited to protect application resources whether they are centralized in your data center or distributed across the data center, private clouds, public clouds, partners, etc. And the core XACML policy language, plus profile extensions such as the hierarchical resource profile, is capable of modeling very complex business rules that will address the vast majority of use cases.

 

GP: Bob Blakley's paper on "The Emerging Architecture of Identity Management" from earlier this year described three architecture models - first a traditional Push model, then a more dynamic future state based on a Pull model, and a third hybrid model to help enterprises make incremental progress from Push to Pull. The pure Pull model looks to solve what I regard as the single biggest security issue we face today - poor integration. Can you discuss how XACML based architectures play a role in these Push and Pull models? Is XACML applicable in all models, or are some more in its sweet spot than others?


GG: In fact, Bob includes XACML in his emerging architecture - referring to products of this class as "XACMLoids." The primary value XACML brings is in externalizing authorization from business applications - "Externalized Authorization Managers" is another excellent report that Bob has recently written on this topic.


In the Push model, XACML systems have a smaller role since identity data is synchronized with or pushed to the application specific identity repositories. In the Pull model, applications call out to the XACML service for authorization decisions using the XACML request/response protocol. If additional attributes are needed to make an authorization decision, the PDP engine can retrieve attributes through a virtualization service. XACML works equally well in the hybrid model - the difference here is that the application does need to persist some identity information in a local repository. That said, the sweet spot for XACML-based systems is likely the pure Pull model or the hybrid scenario.
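To make the Pull-model interaction concrete, here is a minimal sketch of the kind of request a PEP might assemble before calling out to the PDP, loosely following the JSON Profile of XACML. The attribute identifiers are standard XACML URNs, but the helper function name and the sample values are purely illustrative:

```python
# A sketch of a Pull-model authorization request, loosely following the
# JSON Profile of XACML. The helper and sample values are illustrative.
import json

def build_xacml_request(subject_id, resource_id, action_id):
    """Assemble a XACML-style request with subject, resource, and action
    attribute categories, as a PEP would before calling the PDP."""
    return {
        "Request": {
            "AccessSubject": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
                 "Value": subject_id}]},
            "Resource": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
                 "Value": resource_id}]},
            "Action": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
                 "Value": action_id}]},
        }
    }

request = build_xacml_request("alice", "https://example.com/records/42", "read")
print(json.dumps(request, indent=2))
```

The point of the structure is that the application never evaluates policy itself; it only describes who is doing what to which resource and ships that description to the PDP.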


GP: So beyond the XACML language framework, can you briefly describe the components an authorization system needs to implement the XACML architecture in the pure Pull and hybrid scenarios?

GG: The components identified in the XACML reference architecture are: 

Policy Decision Point (PDP): this component receives access requests, evaluates them against known policies, and returns a decision to the caller

Policy Enforcement Point (PEP): this component is located at or near the resource and can be integrated as a gateway, co-located with the application, or embedded within the same process as the application. PEPs basically construct the request in XACML format, forward it to the PDP over a particular protocol, and enforce the decision returned from the PDP

Policy Information Point (PIP): XACML is an attribute based access control (ABAC) model and therefore thrives on attributes. If the PEP does not send all the necessary attributes in the request, the PDP retrieves the additional attributes via the PIP interface.

Policy Administration Point (PAP): Here the policies are written, tested, and deployed - all the expected policy lifecycle management functions.

Policy Retrieval Point (PRP): This is where the policies are stored by the PAP and retrieved by the PDP.
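As a rough illustration of how these components divide the work, the following toy sketch (not a real XACML engine - the policies, attribute names, and stores are all invented) shows a PEP constructing a request, a PDP evaluating it against policies, and a PIP supplying an attribute the PEP did not send:

```python
# A toy illustration of the component roles, NOT a real XACML engine.
# Policies, attribute names, and the attribute store are all made up.

POLICIES = [
    # A policy: finance users may read.
    {"attribute": "department", "value": "finance", "action": "read",
     "decision": "Permit"},
]

PIP_STORE = {"alice": {"department": "finance"}}  # attribute source behind the PIP

def pip_lookup(subject, attribute):
    """PIP: fetch an attribute that was not supplied in the request."""
    return PIP_STORE.get(subject, {}).get(attribute)

def pdp_evaluate(request):
    """PDP: evaluate the request against known policies."""
    for policy in POLICIES:
        if policy["action"] != request["action"]:
            continue
        value = request["attributes"].get(policy["attribute"])
        if value is None:
            # The PEP did not send this attribute; retrieve it via the PIP.
            value = pip_lookup(request["subject"], policy["attribute"])
        if value == policy["value"]:
            return policy["decision"]
    return "Deny"  # default-deny when no policy applies

def pep_check(subject, action, attributes=None):
    """PEP: construct the request, forward it to the PDP, enforce the decision."""
    request = {"subject": subject, "action": action,
               "attributes": attributes or {}}
    return pdp_evaluate(request)

print(pep_check("alice", "read"))  # → Permit (department resolved via the PIP)
print(pep_check("bob", "read"))    # → Deny
```

In a real deployment the PAP would author and deploy the `POLICIES`, and the PRP would store them for the PDP; here they are inlined for brevity.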

 

GP: Given that logical architecture, what do you typically see or recommend in terms of physical deployment? It seems like a minimum would include separate PEP, PDP, and PAP instances, with the PIP and PRP combined with the PEP/PDP and PAP respectively. Is this a way to get started on building the foundation?

GG: Some of these deployment configurations will be product dependent, but in general here are some typical topologies for an XACML system:

1. Shared PDP service: You should have at least 2 PDP instances for availability and business continuity. Additional PDP instances can be deployed for scalability as each server instance is stateless.

2. Embedded PDP: For low-latency scenarios, the PDP can be embedded directly in the application container.

3. Attribute sources: The PDP, via the PIP interface, can connect to several attribute sources directly as one option. A second option is to use a virtual directory as the attribute source manager. Finally, when a persistent, consolidated attribute store is required then privilege granting attributes can be synchronized into a directory.

4. PAP service: Typically this function will be run in an offline mode. Most of the work happens when a new application is onboarded to the environment and we find that XACML policies are quite stable and don't require daily or even weekly adjustments. 

5. PRP repository: Obviously the PAP will store XACML policies in the repository, but the enterprise may have a preferred option for putting policies into production. Operational procedures must also take into account how many PDPs are installed locally or distributed throughout the network. For example, you could use database or directory replication to promote new policies into production.

6. PEP integration with applications: Integrating the XACML system with an XML gateway is a great way to get started, as it has low impact on the existing environment. For more advanced scenarios, you can integrate application-environment-specific PEPs into your business applications.
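The availability pattern in topology 1 - at least two stateless PDP instances - can be sketched as a simple failover loop on the PEP side. The endpoint URLs and the transport function here are hypothetical stand-ins, not a real product API:

```python
# A sketch of PEP-side failover across stateless PDP instances.
# Endpoint URLs and the transport callable are hypothetical.

PDP_ENDPOINTS = ["https://pdp1.example.com/authorize",
                 "https://pdp2.example.com/authorize"]

def authorize(request, call_pdp):
    """Try each PDP in turn; because each instance is stateless,
    any instance can serve any request."""
    last_error = None
    for endpoint in PDP_ENDPOINTS:
        try:
            return call_pdp(endpoint, request)
        except ConnectionError as exc:
            last_error = exc  # fail over to the next instance
    raise RuntimeError("no PDP instance reachable") from last_error

# Simulated transport: pdp1 is down, pdp2 answers.
def fake_transport(endpoint, request):
    if "pdp1" in endpoint:
        raise ConnectionError("pdp1 unreachable")
    return "Permit"

print(authorize({"subject": "alice"}, fake_transport))  # → Permit
```

Statelessness is what makes this trivial: no session affinity or replication is needed between PDP instances, so scaling out is just adding endpoints to the list.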

GP: I look at the domain of information security like a triangle - there are AAA services for Identity and Access Management, Defensive services like monitoring and logging, and finally Enablement services that help integrate the AAA and Defensive services into the organization. XACML was designed to handle parts of the AAA and Enablement challenges, but how can we use XACML and authorization services to improve our Defensive posture? What ways have you seen to implement more robust logging and monitoring through the authorization layers?

GG: The XACML PDP engine should be instrumented so it can provide important information to the logging and monitoring apparatus of the enterprise. At the basic level, the monitoring system can track the number of permit and deny decisions to watch for anomalies. Further, alerts can be triggered if certain thresholds are exceeded - maybe the same user getting repeatedly denied access or a particular IP address making excessive requests. 
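The threshold idea above can be sketched in a few lines: count Deny decisions per subject from the PDP's decision log and flag anyone over a limit. The threshold value and log field names are illustrative, not from any particular product:

```python
# A minimal sketch of decision-log monitoring: flag subjects whose Deny
# count exceeds a threshold. Field names and the threshold are illustrative.
from collections import Counter

DENY_THRESHOLD = 3

def flag_repeat_denials(decision_log):
    """Return subjects denied at least DENY_THRESHOLD times."""
    denies = Counter(entry["subject"] for entry in decision_log
                     if entry["decision"] == "Deny")
    return [subject for subject, count in denies.items()
            if count >= DENY_THRESHOLD]

log = [{"subject": "mallory", "decision": "Deny"}] * 4 + \
      [{"subject": "alice", "decision": "Permit"}]
print(flag_repeat_denials(log))  # → ['mallory']
```

The same counting approach extends to the other signal mentioned - excessive requests from a single IP address - by keying the counter on a source-address field instead of the subject.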

Part of the Conversation is here

December 15, 2010 in Security, Security > 140, XACML | Permalink | Comments (0) | TrackBack (0)

authN and everything after

James McGovern has been beating the XACML drum for a long time. Jackson Shaw opines the reason we don't see many vendors supporting XACML is:

none of the usual suspects have products in this area: Sun, Microsoft, Novell. His answer was quick and short: There's not enough services revenue required for these products.

If companies actually implemented consistent authorization, there would be _plenty_ of services required - anyone who has rewritten authZ and security policy in an app knows this. The real issue, I suspect, is something a little closer to marketecture. Remember that the identity management boom is relatively recent, and a lot of it is driven by compliance.

Surprised auditor: "What do you mean you don't know how many people in this directory still work here?"

But you see, our friends at Sun, IBM, et al. are very motivated to solve this part of the problem. They have connectors that enable companies to load more user accounts into their systems (most of these guys charge on a per-user-account basis), so it means more dollars for them; heck, they should give these tools away - they would make back the license cost in short order through having more users in their systems. It is the same reason why for years Oracle shipped SQL*Loader to optimize super-fast bulk loads _into_ Oracle, but never an SQL*Unloader to optimize fast removal of data from Oracle. Funny, that.

Anyhow, XACML is about policy and authorization, and companies are perfectly happy if you authZ against _their_ container, their JNDI tree, their whatever. Companies that have other dogs in the hunt are in no way incented to solve this problem. Yet a problem it remains. A company that specializes in enabling technologies, like perhaps Vordel or CA, is in a better position to solve this problem than a company that really wants to sell you a bigger database/CRM/ECM/whatever. While vendors don't care, it is a big issue for those in companies with their hands on the wheel, which is why we launched a XACML working group at OWASP. The best possible current practice of IDM solves at best 1/2 of the identity problem.

November 25, 2007 in Security, XACML | Permalink | Comments (1)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015

More...
