Sunday, April 14, 2013

Code Access Security in ASP.NET 4.0

copyright: www.simple-talk.com


In the third, and final article that introduces Code Access Security in .NET Framework 4.0, Matteo explains, with examples, how the Level2 Security Transparent Model works within a hosted ASP.NET environment.
In previous articles we have seen how the Code Access Security model changed in .NET Framework 4.0.
In What's new in code access security in .NET Framework 4.0 - Part I we saw how the CAS Policy System that was used until .NET Framework 3.5 has now been replaced by the Level2 Security Transparent Model. Permissions to use the protected resources granted to an assembly have been moved from the assembly itself to the host in which the assembly runs. All assemblies in a host now have the same security restrictions, thereby conforming to the Homogeneous Domain concept.
In What's new in code access security in .NET Framework 4.0 - Part II we saw that, despite the Level2 Security Transparent Model being apparently all-or-nothing, it is, in fact, possible to use the Allow Partially Trusted Callers Attribute (APTCA) to mix the SecurityTransparent, SecurityCritical and SecuritySafeCritical attributes together to define granular permissions to grant to an assembly when it needs to access protected resources.
In the two previous articles, we have demonstrated how the new CAS technology works, by providing some examples of simple console applications. We said that, in these cases, there is no host to manage, because any simple application will run as an unhosted application, always as full trust code.
In this article we want to analyze how the Level2 Security Transparent Model works within a hosted environment. To do so, we will consider the most important hosted environment that is used today, the ASP.NET Application Domains.
We will start by analyzing how ASP.NET application domains have been modified so as to implement the Level2 Security Transparent Model. We then see how to use configuration files to specify the permissions to grant to assemblies loaded inside these application domains. We will do this with the aid of some examples. Finally, we will see how to use APTCA assemblies in ASP.NET to define, in a more granular way, different permissions for different blocks of code, when more flexibility is required.

ASP.NET 4.0 Application Domain

As described in the MSDN library, an application domain is, “...a construct that hosts use to isolate code running within a process...”
We know that, when a managed application is executed, the .NET runtime creates an application domain in which the assemblies are loaded and executed. For security reasons, an application domain is isolated from other application domains, and the assemblies loaded inside it cannot cross its boundaries.
Prior to .NET Framework 4.0, code within an ASP.NET application domain's boundaries would always execute as full trust. The old CAS Policy System was responsible for granting permissions to each group of code (defined using the code's evidence) contained inside the application domain, and those conditions were verified at the group level. This situation led to a heterogeneous application domain, in which different code groups have different permissions to execute. Because the PermissionSets assigned to each code group were generated from different sources, it was common to see a mixture, or sometimes an overlapping, of permissions.
The .NET Framework 4.0 removes this behavior by removing the entire CAS Policy System. The new ASP.NET application domain now uses the Level2 Security Transparent Model, and the permissions granted to assemblies inside it are now defined on its boundaries. This makes the application domain a partially trusted environment, and code inside it becomes SecurityTransparent. Because the permissions defined for the application domain are granted in the same way to all the assemblies inside it, the application domain becomes a homogeneous application domain.
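To see what a homogeneous, partially trusted application domain looks like from a host's point of view, here is a minimal sketch using the sandboxing overload of AppDomain.CreateDomain that .NET Framework 4.0 provides to hosts. The grant set chosen here is illustrative only; ASP.NET builds its own grant set from its trust-level configuration files:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class SandboxHost
{
    static void Main()
    {
        // The grant set applied at the domain boundary: every assembly
        // loaded into the domain receives exactly these permissions.
        PermissionSet grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        AppDomainSetup setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
        };

        // The overload that takes a PermissionSet creates a homogeneous,
        // partially trusted (sandboxed) application domain.
        AppDomain sandbox =
            AppDomain.CreateDomain("Sandbox", null, setup, grantSet);

        // Such a domain reports IsHomogenous = true and IsFullyTrusted = false.
        Console.WriteLine("IsHomogenous: " + sandbox.IsHomogenous);
        Console.WriteLine("IsFullyTrusted: " + sandbox.IsFullyTrusted);
    }
}
```

This API is .NET Framework-only (CAS sandboxing was removed in .NET Core and later), so the sketch runs only on .NET Framework 4.x.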

ASP.NET Trust Policies

Despite what we have said, the default behavior for an ASP.NET 4.0 application domain is still to run as a full-trust environment. To be able to run it as a partially trusted domain, we need to set a Trust Policy for it.
ASP.NET 4.0 permits four different Trust Levels: Full, High, Medium and Low. While Full (the default value) is used to create a fully trusted application domain, the other three generate a partially trusted application domain, granting it a set of permissions that are defined in the .NET Framework configuration files related to each level.
Configuration files are contained in the Config folder of your .NET Framework 4.0 installation directory (normally C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config). These files are named web_<level>trust.config, where <level> is the desired trust level. The following image shows a screenshot of one of these files, the web_mediumtrust.config file.
Figure 1: screenshot of the web_mediumtrust.config configuration file.
If you take a look at it, you will see that the configuration files contain these kinds of nodes:
  1. a set of <SecurityClass /> xml nodes. They allow you to assign a common name to an assembly. The common name is specified in the Name xml attribute of the node, while the full assembly name is defined in the Description xml attribute. The common name is then used in the rest of the file to refer to it easily.
  2. a <NamedPermissionSets /> xml node. It contains:
  3. a set of <PermissionSet /> xml nodes. Each of them defines a set of permissions that can be applied to an application domain.
  4. a set of <IPermission /> xml nodes assigned to a specific <PermissionSet /> node. Each of them specifies a single permission granted to the application domain.
A configuration file defines three different PermissionSets:
FullTrust: It contains no permissions at all (no <IPermission /> xml nodes are defined). It specifies the directive Unrestricted="True". With it, all the permissions not mentioned in the permission set have full right to be executed. So, if the permission list is empty, the application domain will be a full-trust application domain. All code inside it, unless otherwise specified (for example, by marking the assembly as SecurityTransparent), will run as SecurityCritical.
Nothing: The same as the previous one, but it doesn't contain the Unrestricted="True" attribute. Without it, all the permissions that remain unspecified have no right to execute. Because the list of permissions is empty, the application cannot execute.
ASP.Net: It represents the default value. It contains a list of all the permissions granted to the application domain. When this value is used, assemblies inside it run as SecurityTransparent code.
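As a rough sketch of the shape of these three sets (attribute values abbreviated; consult the actual configuration files for the exact class names and permission lists):

```xml
<NamedPermissionSets>
  <!-- Unrestricted: anything not listed is implicitly granted -->
  <PermissionSet class="NamedPermissionSet" version="1"
                 Unrestricted="true" Name="FullTrust" />

  <!-- No Unrestricted flag and no IPermission children: nothing is granted -->
  <PermissionSet class="NamedPermissionSet" version="1" Name="Nothing" />

  <!-- The default grant set: only the listed permissions are available -->
  <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net">
    <IPermission class="EnvironmentPermission" version="1"
                 Unrestricted="true" />
    <!-- ...more IPermission nodes, one per granted permission... -->
  </PermissionSet>
</NamedPermissionSets>
```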
At this stage, you may notice something puzzling. It would seem that the following combinations:

Trust Level    NamedPermissionSet
Full           n.a.
High           FullTrust
Medium         FullTrust
Low            FullTrust

perform exactly the same thing: the application domain is a full-trust application domain and the code inside it is SecurityCritical (unless otherwise specified). This is exactly what happens. The explanation as to why is left out of this article due to space limits. To understand it, try searching the internet for the HostSecurityPolicyResolver class and how it works.
Another thing that can sound strange is the definition of the Nothing PermissionSet. What would be the point of developing an application and then preventing it from executing?
This PermissionSet is useful for administrative purposes. For security reasons, a web server administrator can set the Nothing PermissionSet at the server level, leaving developers the ability to modify it at the site level. As you will see soon, trust policies are defined in the web.config file of an ASP.NET application. An administrator can set up the machine's web.config file with the Nothing trust level, allowing the developer to set their own configuration in the web.config of their application.
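For example, a machine-level lock-down of this kind might look like the following sketch, using the standard <location> element; the override policy is up to the administrator:

```xml
<!-- Machine-level web.config (sketch): default every application to the
     "Nothing" trust level, but leave allowOverride="true" so that a
     site's own web.config may set a less restrictive level. -->
<location allowOverride="true">
  <system.web>
    <trust level="Nothing" />
  </system.web>
</location>
```

Setting allowOverride="false" instead would make the trust level mandatory for every site on the machine.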
Let's now look at some examples. You can find the source code in the supporting documents section of this article.

A Demo ASP.NET Application

As a demo application, we write a simple dll library, EnvironmentBrowser.dll, that is able to get the list of all the environment variables defined on the machine that hosts the application. We then write an ASP.NET web application that shows the result to the user. We add some useful methods to the web application that are able:
  1. to show the security characteristics of the application domain and
  2. to show the security characteristics of the most interesting (in relation to the goals of the application) methods.
Our EnvironmentBrowser.dll library is made by a single class defined as below:
    public class EnvironmentBrowser
    {
        /// <summary>
        /// Get a list of all environment variables defined for the machine
        /// </summary>
        /// <returns></returns>
        public static string GetEnvironmentVariableList()
        {
            IDictionary items =
                Environment.GetEnvironmentVariables(EnvironmentVariableTarget.Machine);

            StringBuilder sb = new StringBuilder();
            foreach (string x in items.Keys) sb.Append(x + "; ");
            return sb.ToString();
        }

        /// <summary>
        /// Get the user name from the environment variables
        /// </summary>
        /// <returns></returns>
        public static string GetUserName()
        {
            return Environment.GetEnvironmentVariable("USERNAME");
        }
    }
It defines these two methods:
GetEnvironmentVariableList(): It gets all the environment variables defined at the machine level and constructs a string that contains all their names separated by a semicolon.
GetUserName(): It returns the username of the logged user.
Our ASP.NET application is made by only a single page that contains the following code:
        /// <summary>
        /// OnLoad Override
        /// </summary>
        /// <param name="e"></param>
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            WriteDomainProperties();

            WriteMethodsProperties();

            WriteEnvironmentData();

        }

        /// <summary>
        /// Write the domain properties
        /// </summary>
        private void WriteDomainProperties()
        {

            AppDomain app = AppDomain.CurrentDomain;

            try
            {
                WriteToPage("Is Homogenous: " + app.IsHomogenous);

                WriteToPage("Is FullTrusted: " + app.IsFullyTrusted);

                WriteToPage("Permission Set Count: " + app.PermissionSet.Count);

            }
            catch (Exception ex)
            {
                WriteException(ex);
            }

        }

        /// <summary>
        /// Write the security property's value of the principal methods.
        /// </summary>
        private void WriteMethodsProperties()
        {
            try
            {
                WriteToPage("Method WriteEnvironmentData(): "
                    + GetMethodProperty(new PageDefault(), "WriteEnvironmentData"));

                WriteToPage("Method GetEnvironmentVariableList(): "
                    + GetMethodProperty(new EnvironmentBrowser(), "GetEnvironmentVariableList"));

                WriteToPage("Method GetUserName(): "
                    + GetMethodProperty(new EnvironmentBrowser(), "GetUserName"));
            }
            catch (Exception ex)
            {
                WriteException(ex);
            }

        }

        /// <summary>
        /// Write the list of the environment variables defined for the machine
        /// </summary>
        public void WriteEnvironmentData()
        {

            try
            {
                WriteToPage("Environment Variables: ");

                WriteToPage(EnvironmentBrowser.GetEnvironmentVariableList());
            }
            catch (Exception ex)
            {
                WriteException(ex);
            }

            try
            {
                WriteToPage("Current User: " + EnvironmentBrowser.GetUserName());
            }
            catch (Exception ex)
            {
                WriteException(ex);
            }
        }

        /// <summary>
        /// Get the Security Property of a method of an object
        /// </summary>
        /// <returns></returns>
        private string GetMethodProperty(object obj, string methodName)
        {

            Type t = obj.GetType();

            MethodInfo m = t.GetMethod(methodName);

            if (m.IsSecurityTransparent) return "SecurityTransparent";
            if (m.IsSecuritySafeCritical) return "SecuritySafeCritical";
            if (m.IsSecurityCritical) return "SecurityCritical";

            return String.Empty;
        }

For brevity, we have omitted all the other code not needed by our demo. The ‘code behind’ of the page implements the following methods:
  • WriteDomainProperties():   It writes the security properties of the application domain.
  • WriteMethodsProperties(): It writes the security properties of the main methods of the demo application.
  • WriteEnvironmentData():     It writes the environmental variables list and the username of the user logged to the system.
  • GetMethodProperty():           It gets the value of the security property of a method of an object.

The Full Trust Level

We know that the default trust level for ASP.NET 4.0 is the Full trust level. With it, the application domain is fully trusted, and the code inside it is SecurityCritical.
If we launch our application we get a response like this:
Figure 2: Response of the web application that runs in Full trust level.
The application domain is homogeneous and no permissions are enforced to it. As expected, it runs as full trusted and all the monitored methods run as SecurityCritical code. The application is able to get the list of all the environment variables and display the username of the user logged.

The High Trust Level

Now we move the trust level of the application to the High trust level. To do so, we need to modify the web.config file of the application by adding the following lines:
  <system.web>
    <trust level="High" />
  </system.web>
It states that, when the application domain starts, the web_hightrust.config file must be loaded and the ASP.Net PermissionSet (the default value) must be used.
If we now launch our application, we get the following output:
Figure 3: Response of the web application that runs in High trust level.
With the High trust level, the application domain becomes partially trusted and all the methods inside it now run as SecurityTransparent methods. We have sandboxed our application domain.
When WriteDomainProperties() executes as SecurityTransparent code, it is no longer able to get the number of permissions from the PermissionSet property, because its accessor is SecurityCritical. An error message is therefore displayed.
However, we are still able to get the list of all the environment variables defined on the machine that runs the application. This is because the High trust level does not prevent the environment variables from being read. In fact, if you take a look at the web_hightrust.config file you will find the following lines:
<IPermission
        class="EnvironmentPermission"
        version="1"
        Unrestricted="true"
       />
The Unrestricted="true" attribute applied to the EnvironmentPermission states that the code to which the permission applies has full access to the resources protected by the permission.

The Medium Trust Level

Now we change the trust level to Medium to see what happens:
  <system.web>
    <trust level="Medium" />
  </system.web>
Our output will be:
Figure 4: Response of the web application that runs in Medium trust level.
In this case we are no longer able to get the list of all the environment variables, while the username of the logged user is still accessible.
By inspecting the web_mediumtrust.config file, we see that the directive related to the EnvironmentPermission has changed:
<IPermission
          class="EnvironmentPermission"
          version="1"
          Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"
        />
The Medium trust level allows the user to read only the variables TEMP, TMP, USERNAME, OS and COMPUTERNAME. So we are able to get the username of the logged-in user, but not the other information.
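The effect of that Read list can be sketched programmatically: an EnvironmentPermission built from the same semicolon-separated list satisfies a demand for USERNAME but not for an unlisted variable. The class and variable names below mirror the config entry above:

```csharp
using System;
using System.Security.Permissions;

class MediumTrustSketch
{
    static void Main()
    {
        // The grant expressed by the Medium-trust IPermission entry.
        EnvironmentPermission granted = new EnvironmentPermission(
            EnvironmentPermissionAccess.Read,
            "TEMP;TMP;USERNAME;OS;COMPUTERNAME");

        // A demand succeeds when the demanded permission is a subset
        // of what was granted.
        EnvironmentPermission userName = new EnvironmentPermission(
            EnvironmentPermissionAccess.Read, "USERNAME");
        EnvironmentPermission path = new EnvironmentPermission(
            EnvironmentPermissionAccess.Read, "PATH");

        Console.WriteLine(userName.IsSubsetOf(granted)); // covered by the grant
        Console.WriteLine(path.IsSubsetOf(granted));     // not covered
    }
}
```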

The Low Trust Level

We now try to use the Low trust level:
<system.web>
    <trust level="Low" />
</system.web>
The application will generate the following output:
Figure 5: Response of the web application that runs in Low trust level.
In this case neither the username nor the environment variables are accessible.
If you try to browse the web_lowtrust.config file you will see that it does not mention EnvironmentPermission at all, making it totally unavailable.

Analysis of the Exceptions

As some of you have probably noticed, the exception messages seen in the previous paragraphs seem strange. We have said that, with a trust level lower than Full, assemblies that execute inside the application domain are marked as SecurityTransparent. But, while the attempt to get the number of permissions granted to the application domain results in a SecurityCritical violation, as expected, the exceptions related to browsing the environment variables involve the violation of an EnvironmentPermission check. This seems to have nothing to do with SecurityTransparent code attempting to use SecurityCritical code. And that is true. So, what makes this happen?
We know that the System.Environment class is implemented in the mscorlib.dll assembly of the .NET Framework. If we use Red Gate's .NET Reflector to inspect mscorlib.dll, and in particular the methods Environment.GetEnvironmentVariables() and Environment.GetEnvironmentVariable() that are used in our EnvironmentBrowser class, we see that these two methods are implemented as SecuritySafeCritical:
Figure 6: Browsing with the .NET Reflector the Environment.GetEnvironmentVariables() method.
We know, from what we said in the What's New in Code Access Security in .NET Framework 4.0 - Part II article, that this is possible only if the assembly is marked with the APTCA.
With .NET Reflector we see that this is true:
Figure 7: The mscorlib.dll assembly’s attributes list.
The SecurityTransparent method WriteEnvironmentData() calls the two SecuritySafeCritical methods Environment.GetEnvironmentVariables() and Environment.GetEnvironmentVariable(), so no transparency violation occurs at the call itself.
But when the partially-trusted application domain executes some (SecurityTransparent) code inside it, every security demand generates a stack walk to see if the code has the permission to access some protected resources. If, when the stack walk reaches the application domain boundaries, a security violation is detected, an exception is thrown.
In our demo, the code triggers the demand for the permission to get the environment variables list: when the stack walk reaches the application domain boundaries, the exception is thrown if the trust level is lower than High. This explains the exception conditions that we observe in our demo.
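The pattern can be sketched as follows: a library method demands the permission it needs before touching the resource, and the demand's stack walk is what ultimately hits the domain's grant set. The method name here is illustrative, not part of the demo code:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class DemandSketch
{
    // A protected operation typically demands the permission it needs.
    // The demand walks the call stack; in a homogeneous domain the walk
    // ends at the domain boundary, where the grant set is checked.
    static string ReadVariable(string name)
    {
        new EnvironmentPermission(EnvironmentPermissionAccess.Read, name)
            .Demand();
        return Environment.GetEnvironmentVariable(name);
    }

    static void Main()
    {
        try
        {
            Console.WriteLine(ReadVariable("USERNAME"));
        }
        catch (SecurityException ex)
        {
            // Thrown when the demanded permission is not in the grant set,
            // e.g. under the Medium or Low trust levels.
            Console.WriteLine("Demand failed: " + ex.Message);
        }
    }
}
```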

The Conditional APTCA

Suppose that we now want to be able to get the username of the logged-in user, even in the Low trust level, whilst at the same time protecting all the other environment variables. Obviously, the simplest way to do so is to modify the web_lowtrust.config file to allow it to read the USERNAME environment variable, or to generate our own custom configuration file.
For the purposes of this article we will use a different approach. We have seen that, when a demand is made, the partially trusted application domain starts a stack walk that reaches the application domain boundaries and, with the Low trust level, an exception occurs. To prevent this behavior, we can perform an Assert in our code to stop the stack walk. As we know, we cannot perform an assert in SecurityTransparent code, so we need to "transform" our GetUserName() method into a SecuritySafeCritical method. This can be done by modifying our EnvironmentBrowser.dll, adding the APTCA to it. We do not want to stop here, though. We want our dll to use the APTCA only in our application; if another application (or another host in general) tries to use it, it should see the code as SecurityTransparent. To do so, .NET Framework 4.0 uses the so-called conditional APTCA. This is declared as follows:
[assembly:AllowPartiallyTrustedCallers(PartialTrustVisibilityLevel=PartialTrustVisibilityLevel.NotVisibleByDefault)]
This states that the APTCA is not visible by default, but only to hosts that are set up to use it.
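For hosts that build their application domains in code rather than through web.config, the same opt-in can be sketched through AppDomainSetup.PartialTrustVisibleAssemblies, which takes "assembly name, public key" strings; the key below is truncated exactly as it is elsewhere in this article:

```csharp
using System;

class ConditionalAptcaHost
{
    static void Main()
    {
        AppDomainSetup setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,

            // Programmatic counterpart of <partialTrustVisibleAssemblies>:
            // entries of the form "assembly name, public key".
            PartialTrustVisibleAssemblies = new[]
            {
                "EnvironmentBrowser, 00240000048000009400000006020000...."
            }
        };

        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup);
        Console.WriteLine(sandbox.FriendlyName);
    }
}
```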
To keep the attribute visible to our web application we need to add the following lines of code to our web.config file:
     <partialTrustVisibleAssemblies>
                <add
                   assemblyName="EnvironmentBrowser"
                    publicKey="00240000048000009400000006020000 ...."
                />
      </partialTrustVisibleAssemblies>
We need to add the EnvironmentBrowser.dll assembly to the partially-trusted visible assemblies list of the application domain that is associated with our application.
You can see from the previous declaration that, in order to add EnvironmentBrowser.dll to such a list, we need to specify not only the assembly name, but also the public key associated with it (for brevity, we have reported only a portion of it; the full key is a string of 320 characters). The public key is the one that we used to generate the strong name of the assembly (strong naming is a necessary condition). To get the value of the public key you can use the sn.exe command line tool in this way:
sn.exe -Tp EnvironmentBrowser.dll
In our case we obtain:
Figure 8: sn.exe output for the EnvironmentBrowser.dll assembly
Now we can modify our GetUserName() method as follows:
        /// <summary>
        /// Get the user name from the environment variables
        /// </summary>
        /// <returns></returns>
        [SecuritySafeCritical()]
        public static string GetUserName()
        {
           
            EnvironmentPermission permission = new
                     EnvironmentPermission(EnvironmentPermissionAccess.Read, "USERNAME");

            permission.Assert();

            return Environment.GetEnvironmentVariable("USERNAME");
        }
We have marked the method as SecuritySafeCritical and we have created an object of type EnvironmentPermission. In its constructor we have set the permission to read the USERNAME environment variable. Then we have inserted the call to the Assert method of the object.
The final step is to compile our EnvironmentBrowser.dll library and install it in the Global Assembly Cache (that is a necessary condition).
By running our application we obtain:
Figure 9:  Response of the web application running in Low trust level and using the Conditional-APTCA for the EnvironmentBrowser.dll.
You can see that now the GetUserName() method has become SecuritySafeCritical. The Assert prevents the stack walk from reaching the application domain boundaries and, in this way, we are able to get the username of the user logged into the system, as expected.
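One refinement worth noting: an Assert stays in effect for the rest of the stack frame, so a common defensive pattern is to revert it as soon as the protected call completes. A sketch of GetUserName() written that way (same behavior as the method above, but with a smaller elevation window):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

public class EnvironmentBrowserReverting
{
    // Variant of GetUserName() that reverts the assert as soon as
    // the protected call completes.
    [SecuritySafeCritical]
    public static string GetUserName()
    {
        new EnvironmentPermission(EnvironmentPermissionAccess.Read, "USERNAME")
            .Assert();
        try
        {
            return Environment.GetEnvironmentVariable("USERNAME");
        }
        finally
        {
            // Remove the assert from the current stack frame when done.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```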

Conclusion

With this third article about Code Access Security in .NET 4.0, we have completed the review of all the main changes to the CAS system introduced in .NET Framework 4.0. Things have changed a lot but, despite the time needed to learn the new features and how they work, my personal experience has taught me that the effort is rewarded when you put them to work: things are now really much simpler, and less time- (and mind-) consuming.
With these three articles, I hope I was able to assist you in making the transition to the new CAS 4.0 model, and to help you get all the benefits that the new model can bring to your work as a developer.

What's New in Code Access Security in .NET Framework 4.0 - Part 2



Having introduced us to the basics of the new Code Access Security model available in .NET Framework 4.0, Matteo Slaviero explains how to use this powerful new system to implement fine-grained code security in ways that have never before been possible.

Introduction

This article is the second of a series of two which introduce how Code Access Security has changed in .NET Framework 4.0. In the first article, we were introduced to the new .NET Framework 4.0 Level2 SecurityTransparence model and given some examples of its implementation. We've had a glimpse of the kind of changes which must be applied at assembly level in order to keep our code secure, and we have also seen that, with the new model, the host plays a principal role in defining what kind of resources can and cannot be accessed.
From what we saw previously, it seems that the new Level2 SecurityTransparence model is an all-or-nothing technology: if the assembly is fully trusted, all resources are available, and if it is only partially trusted, none of them are.
Thankfully, this is not the case, as we will see in this article. When protecting resources, in order to permit a more granular approach to security, an assembly can be marked with the Allow Partially Trusted Callers Attribute (APTCA). In this way, security attributes become available at class or method level, leading to more flexible configurations.
Another important thing we will see is that, with the Level2 SecurityTransparence model, it is now possible to easily protect resources beyond the classical CAS resources defined in the .NET Framework; we'll call these new kinds of resources "custom resources". Finally, we'll finish this investigation into the new CAS implementation by describing how a new tool, called the Security Annotator tool, can help us to discover the correct way to mix the SecurityCritical and SecuritySafeCritical attributes to implement our desired security strategy. Without further ado, let's get started.

The Allow Partially Trusted Callers Attribute (APTCA)

The Allow Partially Trusted Callers Attribute (APTCA) is an assembly-scoped attribute which changes how the assembly responds to the Level2 Security Transparence model. When used, the following modifications take place:
  1. All the classes and methods inside the assembly become SecurityTransparent unless otherwise specified.
  2. To specify different behavior, the SecurityCritical or SecuritySafeCritical attributes can be added to desired class and/or method implementations.
The APTCA attribute is very similar to the SecurityTransparent attribute used in the previous article, which we used to force an assembly to run as SecurityTransparent. As a result, when the caller assembly tried to access SecurityCritical code, an exception was thrown (remember the PermissionSet property?). As mentioned, the main difference between the two attributes lies in the fact that, when the APTCA attribute replaces the SecurityTransparent attribute, we are able to directly specify security settings for each class or method in an assembly through the use of the SecurityCritical and/or SecuritySafeCritical attributes. If the assembly were marked as SecurityTransparent, these two attributes would have no effect, due to the fact that the SecurityTransparent attribute only works at the assembly level, and no lower.
So, with the APTCA attribute we are able to:
  1. Elevate the permissions of an individual class or method, transforming it into a SecuritySafeCritical class or method. By doing so, we grant the class or method all permissions to access protected resources (as SecurityCritical code) while it remains visible to SecurityTransparent code. Essentially, we create a sort of bridge between SecurityTransparent and SecurityCritical code.
  2. Keep some classes or methods protected from the partially trusted assembly by marking them as SecurityCritical.
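The two options above can be sketched in a single APTCA assembly; the class and method names here are hypothetical, chosen only to illustrate the pattern:

```csharp
using System;
using System.Security;

[assembly: AllowPartiallyTrustedCallers]

namespace AptcaSketch
{
    public class MixedSecurity
    {
        // Default in an APTCA assembly: SecurityTransparent,
        // callable from partially trusted code.
        public static string TransparentOperation()
        {
            return "transparent";
        }

        // Option 1, the bridge: does privileged work internally,
        // but remains visible to SecurityTransparent callers.
        [SecuritySafeCritical]
        public static string SafeBridge()
        {
            return PrivilegedOperation();
        }

        // Option 2: off-limits to partially trusted callers.
        [SecurityCritical]
        public static string PrivilegedOperation()
        {
            return "critical work";
        }
    }
}
```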
As we will soon see, these two features remove the supposed "all or nothing" behavior of the SecurityTransparent attribute. To prove this, we'll start by reusing the example provided in the previous article, with some modifications:
[assembly: AllowPartiallyTrustedCallers()]

namespace CasAssemblyInfo
{
    /// <summary>
    /// Demo class
    /// </summary>
    public class AssemblyInfo
    {
        /// <summary>
        /// Write to the console the security settings of the assembly
        /// </summary>
        public string GetCasSecurityAttributes()
        {
            //gets the reference to the current assembly
            Assembly a = Assembly.GetExecutingAssembly();

            StringBuilder sb = new StringBuilder();

            //show the transparence level
            sb.AppendFormat("Security Rule Set: {0} \n\n", a.SecurityRuleSet);

            //show if it is fully trusted
            sb.AppendFormat("Is Fully Trusted: {0} \n\n", a.IsFullyTrusted);

            //get the type for the main class of the assembly
            Type t = a.GetType("CasAssemblyInfo.AssemblyInfo");

            //show if the class is Critical, Transparent or SafeCritical
            sb.AppendFormat("Class IsSecurityCritical: {0} \n", t.IsSecurityCritical);
            sb.AppendFormat("Class IsSecuritySafeCritical: {0} \n", t.IsSecuritySafeCritical);
            sb.AppendFormat("Class IsSecurityTransparent: {0} \n", t.IsSecurityTransparent);

            //get the MethodInfo object of the current method
            MethodInfo m = t.GetMethod("GetCasSecurityAttributes");

            //show if the current method is Critical, Transparent or SafeCritical
            sb.AppendFormat("Method IsSecurityCritical: {0} \n", m.IsSecurityCritical);
            sb.AppendFormat("Method IsSecuritySafeCritical: {0} \n", m.IsSecuritySafeCritical);
            sb.AppendFormat("Method IsSecurityTransparent: {0} \n", m.IsSecurityTransparent);

            try
            {
                sb.AppendFormat("\nPermissions Count: {0} \n", a.PermissionSet.Count);
            }
            catch (Exception ex)
            {
                sb.AppendFormat("\nError while trying to get the Permission Count: {0} \n", ex.Message);
            }

            return sb.ToString();
        }
    }
}
With respect to the previous version of this dll library, we have inserted the following code prior to the namespace declaration:
[assembly:AllowPartiallyTrustedCallers()]
… which states that our assembly is now an APTCA assembly. We have also added the following lines of code:
//get the MethodInfo object of the current method
MethodInfo m = t.GetMethod("GetCasSecurityAttributes");

//show if the current method is Critical, Transparent or SafeCritical
sb.AppendFormat("Method IsSecurityCritical: {0} \n", m.IsSecurityCritical);
sb.AppendFormat("Method IsSecuritySafeCritical: {0} \n", m.IsSecuritySafeCritical);
sb.AppendFormat("Method IsSecurityTransparent: {0} \n", m.IsSecurityTransparent);
… which allow us to see if the GetCasSecurityAttributes method is SecurityCritical, SecuritySafeCritical or SecurityTransparent. By running the console application which we used to consume the previous assembly (and which you can download at the top of this article), we obtain the following output:
Figure 1. The console output of our modified demonstration program
Looking at figure 1, we can quickly see that:
  1. The assembly is running on the local computer,
  2. The assembly is fully trusted, but the AssemblyInfo class is transparent, and …
  3. Even the GetCasSecurityAttributes method is transparent, and …
  4. When trying to get the PermissionSet.Count value, we get an exception which reminds us that the assembly is marked with the APTCA attribute, so all of its classes and methods are SecurityTransparent, and cannot call SecurityCritical code.
At this point, it seems that we're observing the same behavior we would have obtained by using the SecurityTransparent assembly attribute, so where is the difference? The difference lies in the fact that the APTCA attribute allows us to define the security level of the code in a more granular way. With it, we can directly modify the security level of the GetCasSecurityAttributes method, making it SecurityCritical or SecuritySafeCritical. At this point, we'll choose to set it as SecurityCritical:
/// <summary>
/// Write to the console the security settings of the assembly
/// </summary>
[SecurityCritical()]
public string GetCasSecurityAttributes()
{
… and by running the .exe a second time, we obtain the following result:
Figure 2. Running the demonstration program with fine-grained control of method security level in place.
As you can see, the exception message has disappeared because, even if the class is SecurityTransparent, the underlying method is now SecurityCritical and can execute the PermissionSet property’s accessor. Just to demonstrate the difference between the APTCA and SecurityTransparent attributes, if we replace the following line:
[assembly:AllowPartiallyTrustedCallers()]
… with:
[assembly:SecurityTransparent()]
which we used in Part I of this short series (as I mentioned at the start of this section), we get a familiar output:
Figure 3. Running the demonstration program with assembly-level SecurityTransparency in place, and no fine-grained control.
As expected, the SecurityCritical attribute on the GetCasSecurityAttributes now has no effect, and the method remains SecurityTransparent.

Custom Resources

Despite the simplicity of the previous example, the SecurityCritical and SecuritySafeCritical attributes can be mixed together in APTCA assemblies in very different ways to set up custom protection strategies. Rather than always invoking the same classical protected resources of a system, let’s look at an example that shows how the Level2 Security Transparent model can be used to protect any type of resource we want, thus going beyond the legacy CAS Policy model. Consider the following CasWriter class, defined inside a demo assembly named CasWriter.dll:
[assembly: AllowPartiallyTrustedCallers()]

namespace CasWriterDemo
{
    /// <summary>
    /// Write sentences
    /// </summary>
    public class CasWriter
    {
        /// <summary>
        /// Write a sentence to console
        /// </summary>
        /// <param name="text"></param>
        public void WriteCustomSentence(string text)
        {
            Console.WriteLine(text + "\n");
        }

        /// <summary>
        /// Write a sentence to console
        /// </summary>
        /// <param name="index"></param>
        public void WriteDefaultSentence(int index)
        {
            switch (index)
            {
                case 0:
                    WriteCustomSentence("homo homini lupus");
                    break;
                case 1:
                    WriteCustomSentence("melius abundare quam deficere");
                    break;
                case 2:
                    WriteCustomSentence("audaces fortuna iuvat");
                    break;
            }
        }

        /// <summary>
        /// Get the Security status of each method developed
        /// </summary>
        public string GetMethodsSecurityStatus()
        {
            // get the MethodInfo of each method
            MethodInfo[] infos = GetType().GetMethods();
            StringBuilder sb = new StringBuilder();
            foreach (MethodInfo m in infos)
            {
                if (m.ReturnType != typeof(void)) continue;
                sb.Append("\n");
                sb.Append(m.Name + ": ");
                if (m.IsSecurityCritical)
                {
                    sb.Append("SecurityCritical\n");
                }
                else if (m.IsSecuritySafeCritical)
                {
                    sb.Append("SecuritySafeCritical\n");
                }
                else if (m.IsSecurityTransparent)
                {
                    sb.Append("SecurityTransparent\n");
                }
            }
            return sb.Append("\n\n").ToString();
        }
    }
}
The class has the following three public methods:
  • WriteCustomSentence(string text): this method writes a sentence, passed to it as input, to the console.
  • WriteDefaultSentence(int index): This method writes a fixed sentence to the console, selecting from among three possible values. The input parameter states which sentence to write.
  • string GetMethodsSecurityStatus(): This method returns, as a string, the Security status of the two previous methods.
Now we write a console application (CasWriterDemo.exe) that consumes the previous methods:

[assembly: SecurityTransparent()]

namespace CasWriterDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            CasWriter writer = new CasWriter();
            Console.WriteLine(writer.GetMethodsSecurityStatus());
            try
            {
                Console.Write("Custom Sentence: ");
                writer.WriteCustomSentence("Barba non facit philosophum");
            }
            catch (Exception ex)
            {
                Console.WriteLine("\n\n" + ex.Message + "\n\n");
            }
            try
            {
                Console.Write("Default Sentence: ");
                // upper bound is exclusive: selects an index from 0 to 2
                writer.WriteDefaultSentence(new Random().Next(0, 3));
            }
            catch (Exception ex)
            {
                Console.WriteLine("\n\n" + ex.Message);
            }
            Console.ReadKey();
        }
    }
}
We have marked the CasWriterDemo.exe assembly as SecurityTransparent because we want to test what happens when the CasWriter.dll assembly is called by partially trusted code.
Given that the CasWriter.dll is marked with the APTCA attribute, all the code inside it is SecurityTransparent, and so we should expect that the application will run correctly. We are in a situation where SecurityTransparent code calls other SecurityTransparent code, and the Level2 SecurityTransparent model certainly allows this. Running the application, we obtain the following result:
Figure 4. Testing the new demonstration CASWriterDemo program.
We see from figure 4 that, as expected, the two methods are both SecurityTransparent and the sentences are correctly written to the console. Now suppose that we want to prevent partially trusted code from being able to write a custom sentence, and only leave it with the ability to write a default sentence selected from an index. In this situation, the WriteCustomSentence therefore becomes our protected resource. To achieve this, we need to:
  1. Mark the WriteCustomSentence method as SecurityCritical, so that SecurityTransparent code cannot access it.
  2. Mark the WriteDefaultSentence method as SecuritySafeCritical.
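The two modifications can be sketched directly on the method signatures (a sketch only; the attributes are the new part, the method bodies are unchanged from the listing above, and the enclosing namespace is omitted for brevity):

```csharp
using System;
using System.Security;

public class CasWriter
{
    // Modification 1: SecurityTransparent (partially trusted) callers
    // can no longer invoke this method directly.
    [SecurityCritical]
    public void WriteCustomSentence(string text)
    {
        Console.WriteLine(text + "\n");
    }

    // Modification 2: a SecuritySafeCritical bridge, callable by
    // transparent code, which in turn calls the critical method above.
    [SecuritySafeCritical]
    public void WriteDefaultSentence(int index)
    {
        switch (index)
        {
            case 0: WriteCustomSentence("homo homini lupus"); break;
            case 1: WriteCustomSentence("melius abundare quam deficere"); break;
            case 2: WriteCustomSentence("audaces fortuna iuvat"); break;
        }
    }
}
```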
This second modification may sound a little strange; after all, the WriteDefaultSentence method is already SecurityTransparent, and so it can be accessed by other SecurityTransparent code. Our executable is also SecurityTransparent, so it can also access the SecurityTransparent WriteDefaultSentence method. However, you should note that the WriteDefaultSentence method uses the WriteCustomSentence method after a sentence has been selected.
The overall effect is that the SecurityTransparent WriteDefaultSentence method now calls a SecurityCritical method: WriteCustomSentence. So, if we try to call WriteDefaultSentence from SecurityTransparent code, we’ll get an exception; let’s try to run our .exe without the second modification:
Figure 5. Running the demonstration .exe without marking the WriteDefaultSentence method as SecuritySafeCritical.
As we can see, the WriteCustomSentence method is now SecurityCritical, and cannot be accessed by SecurityTransparent code. You can find the exception associated with this behavior after the “Custom Sentence:” line in figure 5. To quickly recap, the WriteDefaultSentence method is SecurityTransparent, so the main method of the .exe can access it, but when WriteDefaultSentence tries to use the WriteCustomSentence method to write the output to the console, an exception occurs, as you can see after the “Default Sentence:” line in figure 5.
So, analyzing each step involved in this demonstration, we have:
  1. The Main method calls WriteCustomSentence, which leads to an exception. A SecurityTransparent method cannot call a SecurityCritical method.
  2. The Main method calls WriteDefaultSentence, which is successful. A SecurityTransparent method can call a SecurityTransparent method.
  3. The WriteDefaultSentence method calls WriteCustomSentence, which leads to an exception. A SecurityTransparent method cannot call a SecurityCritical method.
If, as suggested in the second modification above, we mark the WriteDefaultSentence method as SecuritySafeCritical, we solve this potential problem. SecuritySafeCritical code is designed to act as a permission bridge, in that it can be called by SecurityTransparent code and it can, in turn, call SecurityCritical code. So, with this modification, we will create a bridge between the SecurityTransparent code (the Main method) and the SecurityCritical code (the WriteCustomSentence method). If we now run our .exe, we see this result:
Figure 6. Using SecuritySafeCritical code to bridge the permission gap between the Main method and the WriteCustomSentence method.
… which is exactly the result we want to achieve. We have protected the WriteCustomSentence method (our custom resource) from the partially trusted assembly (which is SecurityTransparent code) while allowing the same assembly to access the WriteDefaultSentence method!

Inheritance and Override Rules

We’ve seen how resource protection works when one method calls another, but the security checks performed in these situations are not enough to achieve a complete set of security instruments. For example, we know that object-oriented languages, such as those provided with .NET, allow the inheritance and overriding of classes, methods and types, so we also need to protect those same objects when derived versions of them are created. The new .NET Framework 4.0 Code Access Security system manages this need by using the following two rules:
  1. Derived types must be at least as restrictive as base types.
  2. Derived methods cannot modify the accessibility of their base methods.
Derived methods are SecurityTransparent by default and so, if the base method is not SecurityTransparent, the derived method must be marked appropriately to avoid violating the first inheritance rule.
To demonstrate the two rules, we’ll write a CasWriter2 class that inherits from the CasWriter class, and will have a WriteCustomSentence method that inherits from the base WriteCustomSentence method (which we mark as virtual). The code for this will be:
namespace CasWriterDemo
{
    /// <summary>
    /// Write sentences
    /// </summary>
    public class CasWriter2 : CasWriter
    {
        /// <summary>
        /// Write a sentence to console
        /// </summary>
        /// <param name="text"></param>
        public override void WriteCustomSentence(string text)
        {
            base.WriteCustomSentence(text);
        }
    }
}
To demonstrate the first inheritance and override rule, we’ll set the CasWriter class as SecurityCritical:

/// <summary>
/// Write sentences
/// </summary>
[SecurityCritical()]
public class CasWriter
… and, in the main method of the CasWriterDemo.exe assembly, we’ll substitute the CasWriter object with the CasWriter2 object:

static void Main(string[] args)
{
    CasWriter writer = new CasWriter2();
    Console.WriteLine(writer.GetMethodsSecurityStatus());
So, we’ve tried to derive the SecurityTransparent CasWriter2 class from a SecurityCritical CasWriter class but, with the first rule in place, this is not possible, because we have tried to create a SecurityTransparent (less protected) type from a SecurityCritical (more protected) type. As a result, if we run our .exe we obtain:
Figure 7. An exception thrown from trying to derive a SecurityTransparent type from a SecurityCritical one.
As expected, a type load exception is thrown, stating that an inheritance security rule has been violated. Notice that the .exe stops working as well; because the exception is detected when the assembly tries to load the CasWriter2 type, it’s not possible to handle the exception through code.
To make this as clear as possible, the following table sums up the allowed inheritance combinations for classes:

  Base Class      Derived Class
  Transparent     Transparent
  Transparent     SafeCritical
  Transparent     Critical
  SafeCritical    SafeCritical
  SafeCritical    Critical
  Critical        Critical
To demonstrate the second rule, we’ll remove the SecurityCritical attribute from the CasWriter class. In this case, the first rule is no longer violated, as both classes are SecurityTransparent. However, there is a second issue to consider; we are trying to override SecurityCritical code (the base WriteCustomSentence) with what is now SecurityTransparent code (the derived WriteCustomSentence), which is not allowed by the second rule. Remember that the derived method is SecurityTransparent by default, and we haven’t specified any other security attribute for it. Running the .exe, we therefore get:
Figure 8. Our new demonstration program violating the second Inheritance and Override CAS rule, and throwing an exception.
As expected, an exception is thrown saying that there is a violation of a security rule when overriding WriteCustomSentence. I’ll leave it to you to mark the WriteCustomSentence method of the CasWriter2 class as SecurityCritical and verify that, in this last situation, all goes well. To try it, you can download the supporting zip file at the top of the page, which contains the entire example provided in this article. Before we finish looking at methods, let’s just confirm their inheritance rules:
  Base Method     Derived Method
  Transparent     Transparent
  Transparent     SafeCritical
  SafeCritical    Transparent
  SafeCritical    SafeCritical
  Critical        Critical
I’ll end this section by pointing out that the same rules apply when we develop a class that implements an interface. The implemented method must respect the inheritance rules (the same as those in the table above) in relation to the attributes set for the interface members.
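As a minimal sketch of the interface case (the interface and class names here are invented for illustration, not taken from the sample code): if an interface member is annotated, the implementing method must carry a compatible annotation, following the same table.

```csharp
using System;
using System.Security;

public interface ISentenceWriter
{
    // The interface member is marked SecurityCritical...
    [SecurityCritical]
    void WriteSentence(string text);
}

public class SentenceWriter : ISentenceWriter
{
    // ...so the implementing method must be annotated compatibly
    // (here SecurityCritical); leaving it transparent would violate
    // the implementation rules, just like an invalid override.
    [SecurityCritical]
    public void WriteSentence(string text)
    {
        Console.WriteLine(text);
    }
}
```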

The .NET Security Annotator Tool

In the previous example we saw how to mix the SecurityCritical and SecuritySafeCritical attributes to protect the WriteCustomSentence method from partially trusted code. Admittedly, that example was very easy, and setting the correct attributes was a trivial task. Things are not so easy with more complex assemblies, and there is a risk of creating confusion as you try to unravel the security dependencies. This is precisely why Microsoft’s .NET Framework 4.0 provides a very useful tool, named .NET Security Annotator (SecAnnotate.exe), which can help developers to identify the correct attributes to use in their code. You can find it in the Microsoft Windows SDK version 7.0A, under the \bin\NETFX 4.0 Tools folder.
The SecAnnotate.exe tool browses an assembly to identify what modifications have to be made to avoid security exceptions when the assembly runs, and the checks are made in several passes. In the first pass, the tool discovers what modifications must be performed on the assembly as it initially exists. If it detects that some code must be marked as SecurityCritical or SecuritySafeCritical, it performs a second pass, applying, at run time, the modifications discovered to be necessary in the first pass. The tool then makes a third pass and, if it detects that new modifications are needed as a result of the previous changes, it makes these modifications in a fourth pass. The process repeats itself (scan – modify – scan – modify…) and ends when the tool doesn’t find anything left to change. At the end of the execution, SecAnnotate.exe generates an output report that contains the result of the analysis performed in each step.
There are two things you should bear in mind:
  1. If SecAnnotate.exe discovers that a method should be marked as either SecurityCritical or SecuritySafeCritical, it prefers the first attribute, it being the more secure option. Sometimes developers need to manually select the SecuritySafeCritical attribute instead of SecurityCritical, and this could generate problems during the following passes; we will see an example of what I mean in a moment. To avoid this, the SecAnnotate.exe tool comes with the /p: command-line switch, which can be used to set the maximum number of passes that can be performed before stopping the execution and generating the output. For a more tightly-controlled process, which allows you to take direct and fine-grained control of your code security, it would be better to:

    1. run the tool with the /p:1 command-line switch so that, at each pass, a new output is generated;
    2. manually perform the desired modifications to the assembly on the basis of that output,
    3. recompile your assembly and
    4. re-run SecAnnotate.exe with the /p:1 command-line switch to obtain a new output, and repeat. The procedure ends when no other modifications are needed, just as when you allow SecAnnotate.exe to run without human intervention.
  2. To perform the check, the SecAnnotate.exe tool has to verify how the assembly’s methods behave in relation to the methods that they call. Usually, assemblies use the .NET Framework base classes, and so checks regarding the attributes needed to call their methods can be performed. If an assembly uses other (third party or your own) assemblies, different from those present in the .NET Framework base classes (and, in general, from those contained in the Global Assembly Cache), the path to them must be specified using the /d: command-line switch.
With all that in mind, if we return to our CasWriter.dll assembly, remove the security attributes which we set in the previous section and launch the following command from the console:
SecAnnotate.exe CasWriter.dll
…we will obtain the following output:
Figure 9. Running SecAnnotate.exe against our demonstration program.
The tool doesn’t find anything to annotate, because the assembly is made up of SecurityTransparent code that calls other SecurityTransparent code (specifically, the code of the .NET Framework base classes that we used).
But, if we want to protect the WriteCustomSentence method by marking it as SecurityCritical (as we did earlier), and we launch the previous command on the newly compiled assembly, we get a different result:
Figure 10. Getting SecAnnotate.exe to do some work on our demo .exe.
We can see that the tool found three necessary annotations, and the job was completed in two passes. Moreover, it generated a detailed report titled TransparencyAnnotations.xml (we can override the name with the /o: command-line switch), the contents of which look like this:
Figure 11. The contents of TransparencyAnnotations.xml
We can quickly see that the SecAnnotate.exe tool made an annotation in the WriteDefaultSentence method, for three identical reasons. The rule violated is TransparentMethodsMustNotReferenceCriticalCode, as we expected. The three reasons are all identical because the SecurityTransparent WriteDefaultSentence method contains three calls to the SecurityCritical WriteCustomSentence method (inside the switch block of code).
Another important aspect of this report to take note of is that the tool suggests four different ways to avoid the annotation:
  1. WriteDefaultSentence must become SecurityCritical
  2. WriteDefaultSentence must become SecuritySafeCritical
  3. WriteCustomSentence must become SecuritySafeCritical
  4. WriteCustomSentence must become SecurityTransparent
While any one of these would, applied separately, resolve the problem, we know that, for the goals we have in mind, the only suitable solution is to make WriteDefaultSentence SecuritySafeCritical, in order to grant access to it from SecurityTransparent code while leaving the WriteCustomSentence method protected. We also know that the tool, after the first pass, applies the rule that it considers preferable as it performs its second pass, and that it prefers changes that bring about the most secure situation.
In our example it might have chosen option number 1 and, as a result, the assembly would become fully SecurityCritical, and thus completely protected from SecurityTransparent code. This represents the most secure situation. However, we know that, for our goals, the solution that we need is number 2. Indeed, applying option number 1 instead of number 2 could bring about another round of checks with a totally different output, sending SecAnnotate.exe further and further away from our desired outcome. So, as I mentioned, we should probably use the tool with the /p:1 command-line switch, and make the changes manually.
We’ll end this section by running the SecAnnotate.exe tool against the console application, just to see what happens. To do so, we need to specify the location of the CasWriter.dll assembly on which CasWriterDemo.exe depends. As seen earlier, we must use the /d: command-line switch; assuming that CasWriter.dll is contained in the root of the D:\ drive, we need to run the following command:
SecAnnotate.exe /d:D:\ CasWriterDemo.exe
The output that we get is seen below:
Figure 12. Running SecAnnotate.exe against our console application.
We can quickly see that the tool has found only one annotation, which we can see in the accompanying report:
Figure 13. The SecAnnotate.exe report for our console application.
The annotation is related to the Main method, which is SecurityTransparent and is trying to access SecurityCritical code. Note that this is not a mistake, but rather the behavior that we wanted to implement in our CasWriter.dll to protect WriteCustomSentence from SecurityTransparent code (such as the Main method). So, when using this tool, analyze the output generated with great care and attention.
Beyond all this, there is one last important point to consider. We have written two assemblies, CasWriterDemo.exe and its related CasWriter.dll, and if we want to use the SecAnnotate.exe tool to check the CAS rules for the entire solution, we simply cannot do it in a single step. In the last example, we analyzed the CasWriterDemo.exe assembly by specifying its referenced CasWriter.dll assembly. However, from the output that we obtained, it's clear that the checks were only made for the CasWriterDemo.exe assembly and how it behaves in relation to its dependent CasWriter.dll assembly - no check was made for the CasWriter.dll assembly (if you didn’t notice it at the time, the three annotations related to the CasWriter.dll assembly are not present in the later report).
The point I’m trying to make is that, if you want to check your entire solution, you need to perform the check for each assembly, one at a time. Unless you have specific security goals in mind, the best way seems to be to check the dependent assembly first, and then its immediate callers.

Conclusion

We’ve covered a huge amount of ground in this article (much of which was set up and based on material in my previous CAS article, which you should read if you haven’t done so already). To start with, we’ve seen how to use the APTCA, SecurityCritical and SecuritySafeCritical attributes to set up a protection strategy when an assembly must be callable by partially trusted code. We have also seen that the work that has to be done to implement security strategies within this new model is not as easy as we might like but, fortunately, the new SecAnnotate.exe tool can give us a great head start.
Let’s end this article with some reflections on how to set up a successful protection strategy when working with the new Level2 Security Transparent model. We can define two different situations in which we are likely to find ourselves:
  1. Our assembly must protect the underlying dependent assemblies (for example, the .NET Framework base classes). In this case, we need to maximize the amount of SecurityCritical code. In this way, we are able to protect all the dependent assemblies from partially trusted (SecurityTransparent) assemblies with an impenetrable “wall”.
  2. Our assembly will be protected by its potential callers. If our assembly needs to be accessed by partially trusted assemblies, we need to maximize the amount of SecurityTransparent code. If the assembly doesn’t make use of protected resources, we only need to mark it as SecurityTransparent; otherwise, we must use the APTCA attribute and try, method by method, to maximize SecurityTransparent code and minimize SecuritySafeCritical code.
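As a purely illustrative sketch of the second guideline (the namespace, class and method names here are invented, not from the article): keep the bulk of an APTCA assembly transparent, and confine the protected work to one narrow SecuritySafeCritical member.

```csharp
using System;
using System.IO;
using System.Security;

[assembly: AllowPartiallyTrustedCallers]

namespace GuidelineDemo
{
    public static class SettingsReader
    {
        // SecurityTransparent by default under APTCA: freely callable
        // by partially trusted code.
        public static string Describe()
        {
            return "Reads application settings from disk";
        }

        // The only non-transparent member: a narrow bridge around the
        // protected resource (file I/O, in this sketch).
        [SecuritySafeCritical]
        public static string ReadSettings(string path)
        {
            return File.ReadAllText(path);
        }
    }
}
```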
Of course, it isn’t so easy to anticipate the entire spectrum of possible scenarios and verify whether the two rules above are applicable to all of them, so they must be considered general guidelines instead. In any case, we shouldn’t go too deeply into this particular subject at this stage; partly because it is too complex to analyze succinctly, and partly because the Level2 Security Transparent model is, at the time of writing, a very new and not yet sufficiently documented technology. I’d suggest that you follow the .NET Security Blog, which will surely, over time, bring you up-to-date about the new Level2 Security Transparent model and its implementations.