Tuesday, November 27, 2007

Adaptive Access Control

A few weeks back I attended Oracle OpenWorld in San Francisco, and while walking the demo grounds to see what Oracle has to offer in Identity and Access Control, I met a product group that is building an "Adaptive Access Control" product.

The product builds intelligence from your previous access patterns and compares them against each new logon. It can be configured to gather metrics over a predefined period of time and then freeze the statistics for evaluating subsequent access requests.

For example, if you normally access the system between 8 AM and 5 PM on a daily basis and one day a request comes in at 10 PM, it will deny the access.

This seems like a good idea to me, except for two open questions: how long it takes to capture the metrics, and how it handles exceptions.

Another scenario: you normally access the system from the North America region, and one day a request comes from India; the product then has a valid reason to suspect the request.
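To make the idea concrete, here is a tiny Python sketch of how such a rule might look. The profile values, thresholds, and the "challenge" outcome are purely my own illustration, not Oracle's actual product logic.

```python
# Toy profile "learned" over the observation window: typical login
# hours and regions for this user. Values are illustrative only.
PROFILE = {"hours": range(8, 17), "regions": {"North America"}}

def assess_login(hour: int, region: str, profile=PROFILE) -> str:
    """Return 'allow' when the request matches the learned pattern,
    'challenge' otherwise (a real product might step up authentication
    rather than deny outright)."""
    if hour in profile["hours"] and region in profile["regions"]:
        return "allow"
    return "challenge"
```

A 10 AM login from North America would pass, while the 10 PM or from-India cases above would be flagged.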

Saturday, November 3, 2007

Open Source Identity and Access Management: VELO

Fortunately I got a chance to talk directly to Asaf Shakarchi (the father of VELO), and I asked him why he named it VELO.

Asaf: It was taken from "velo binding". You can read about it on Wikipedia.

My understanding: I then read about it on Wikipedia and tried to relate it to identity and access control, and realized the name is quite fitting, as the product is also trying to link, bind, and control identities.

I also asked him about the function of the remote performer, and below is what I understood from his explanation.

The remote performer is a kind of load balancer that can be used to delegate responsibilities of the VELO server. For example, if the environment has many resources and you don't want the server to wait for responses from each resource once you provision, you can use the VELO remote performer. Those requests are delegated to the remote performer while the VELO server carries on with other important tasks.

The remote performer is not a must for deploying VELO, but it gives additional flexibility to distribute the load.

Monday, October 29, 2007

How To Break Web Software - A look at security...

I found the video below on YouTube and liked it. It is a little long, but worth watching.

Friday, October 26, 2007

Oracle database security and PCI DSS

Today I was browsing for data-masking technology and the products available in this space, and found one good link on the Oracle site.

This link explains each PCI DSS requirement in detail.

Thursday, October 25, 2007

Strong Authentication by biopassword

I have watched many sci-fi movies where guys play around on the keyboard, trying to hack someone's system by figuring out the password. You already know that every individual has his or her own way and speed of typing on a keyboard. BioPassword has built a product on exactly those grounds. Their product captures keystrokes and builds a pattern of how the user keys in the password; during authentication their software checks for that pattern and denies access if the pattern does not match. I have tested the demo on their site.

Check it out yourself and have fun.
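My rough guess of what such a product does under the hood, as a toy Python sketch. Real keystroke-dynamics engines use far richer statistical models than this simple per-interval tolerance check; everything here is my own illustration.

```python
def rhythm_matches(enrolled, attempt, tolerance=0.25):
    """Compare two lists of inter-key timing intervals (in seconds).
    Accept only when every interval is within `tolerance` of the
    enrolled pattern -- a crude stand-in for a typing-rhythm template."""
    if len(enrolled) != len(attempt):
        return False
    return all(abs(e - a) <= tolerance for e, a in zip(enrolled, attempt))
```

Even a correct password typed with a very different rhythm would fail such a check.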

SAML and desktop SSO

Today I was reading a post and found one good blog which I would like to share with you. PingIdentity has developed an Integrated Windows Authentication toolkit to provide SSO to Google applications.

You may have read my post on SAML, where I discussed how Google Apps uses SAML for federated authentication to applications like Gmail and Google Talk used by corporations with their own domain (like abc@company.com hosted on the Gmail interface). In that post I mentioned, under section 1-a, that if the user is already authenticated, the identity provider will not ask the user for credentials again but will directly grant access to the Google application.

The diagram below shows my understanding of how it might be working. I am not showing anything specific to the PingIdentity implementation; this is purely my own take on the solution.

The Microsoft GINA component can be customized to capture the user credential, and Microsoft has also exposed an API to set cookies for Internet Explorer. Keep in mind that everything goes through the user's browser, so if the user has a cookie for some domain (the identity provider), the browser will send it to that server.

Similarly, the cookie can be deleted on the user-logout event. If the solution needs it, a persistent cookie can also be set, which will expire after the persistence time.
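To illustrate the session vs. persistent distinction, here is a small sketch using Python's standard http.cookies module to build the Set-Cookie header an identity provider might emit. The cookie name is made up for the example.

```python
from http.cookies import SimpleCookie

def build_cookie(name, value, max_age=None):
    """Render a Set-Cookie header line. Without max_age the cookie is a
    session cookie (dies with the browser); with max_age it is
    persistent and expires after that many seconds."""
    cookie = SimpleCookie()
    cookie[name] = value
    cookie[name]["path"] = "/"
    if max_age is not None:
        cookie[name]["max-age"] = max_age
    return cookie.output(header="Set-Cookie:")

session_cookie = build_cookie("idp_session", "abc123")
persistent_cookie = build_cookie("idp_session", "abc123", max_age=3600)
```

Deleting a cookie on logout is the same trick with the lifetime set to zero (or an expiry in the past).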

Tuesday, October 23, 2007

ProQuo Implementation of centralized identity

Many people in the USA (sorry for being location specific, but this implementation is done for the USA) suffer from tons of junk mail arriving in their mailbox (the physical mailbox). Publishers and marketing organizations send lots of advertising material, and 80-90% of it goes straight into the trash next to the mailbox. It wastes billions of dollars in paper and postage.

This new site is a good effort to stop these unnecessary mailings. Once you register and log in at the site (ProQuo), it displays all the publications you are receiving. After registration you can view your current publications by clicking on the dashboard link.

The image below shows how ProQuo might be getting publication information from different marketing organizations.

Once a person registers with his name, email, and address, the site can show how many publications he has. They are likely able to collect this information based on your identity (name + address).

When I registered on the site and tried to stop some of the publications, I got stuck with some paperwork (you have to print out a PDF letter asking to stop the publication and post it to the publisher). The problem is that most people have at least 10-15 publications. Every letter costs about 50 cents to post, so it is roughly $5 per household to stop all the junk publications.

Identity Solutions:

I feel that over-the-wire solutions could be built to eliminate the physical correspondence between the user and the publisher. The following are some ways this could be implemented:

1) Using SAML: a user assertion can be sent to the publisher over the wire. The SAML assertion can also contain the encoded PDF letter, which can serve as proof at the publisher's end.

2) Using custom cryptography: the user's signed PDF can be sent to a web service running at the publisher when the user wants to stop a publication.

Share your comments about how it can be implemented.


Friday, October 19, 2007

How Kerberos was evolved - Inside Story Part 2

In my last post I discussed a few steps in the evolution of the Kerberos protocol. In this post I am going to conclude it, so read on...

Just to recollect the problem statements where I left off in the last post, so you get some continuity:


1) Once tickets are generated at the user's system and passed over the wire for service access, they can be tapped on the wire. For example, the user got the encrypted ticket during the authentication process and decrypted it successfully; now the user passes this ticket to the mail server for mail access. At this point someone can capture the ticket and replay it later.

2) Tickets are good forever. Once assigned, a ticket never expires.

To solve the first problem, the Authentication Server (Ticket Granting Server [TGS]) generates a session key which is sent to the user. The TGS also sends the user a ticket encrypted with the service's password. Below is the detailed structure:

[SessionKey:[*Service ticket encrypted with the service key]]

* Structure of the service ticket: [SessionKey:lifespan:issuing time:service name]

The issuing time and lifespan are introduced so tickets are valid only for a specific amount of time and are no longer good forever.

The TGS encrypts the session key and ticket with the user's password to address the replay attack, because the password is shared only between the user and the Authentication Server.

Step by Step process for user authentication and service access

1) The user enters the user ID, and the KINIT program on the user's machine sends it to the TGS for authentication.

2) The TGS creates a Ticket Granting Ticket [TGT] (this will be used to get service tickets) and sends it to the user, encrypting the whole content with the user's password.

3) The user decrypts the TGT using the password to get the ticket. If someone sniffs the packet, he will not be able to decrypt it, as the password is shared between the user and the TGS only.

4) The user wants to access his mail but does not have a mail-service ticket, so he contacts the TGS for one by submitting an Authenticator, the TGT, the user's name, and the workstation address. The Authenticator is encrypted with the session key sent by the TGS during the initial handshake.

* Authenticator = {username : service name : timestamp : lifespan}

5) The TGS gets the user's session key and decrypts the Authenticator. After validating the identity, it generates the following and sends it, encrypting the whole content with the SESSION key:

i) A ticket for the mail service. This ticket contains the session key, service name, lifespan, timestamp, and client address, and is encrypted with the service's password (even the user cannot decrypt it).
ii) A session key to be used to encrypt the conversation with the mail service.

6) The user receives the packet and decrypts the content using the session key (given during the first handshake with the TGS).

7) The user prepares an Authenticator and encrypts it with the SESSION key (generated for the mail-service communication in step 5).

8) The mail service receives the Authenticator and the ticket. It decrypts the ticket using its password to get the session key, uses the session key to decrypt the Authenticator, validates the Authenticator, and sends an acknowledgement to the user encrypted with the session key.

9) The user receives the acknowledgement and starts using the service.
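The flow above can be sketched as a toy Python simulation. The XOR "cipher" here is for illustration only (real Kerberos uses proper symmetric ciphers such as DES or AES), and the ticket structure is simplified to the fields discussed above.

```python
import hashlib
import json
import time

def _keystream_xor(key: str, data: bytes) -> bytes:
    """Toy stream cipher for illustration -- NOT real cryptography."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt(key, obj):
    return _keystream_xor(key, json.dumps(obj).encode())

def decrypt(key, blob):
    return json.loads(_keystream_xor(key, blob))

def issue_service_ticket(user_password, service_key, session_key, lifespan=300):
    """TGS side: the session key goes back encrypted under the user's
    password, while the service ticket is encrypted under the service's
    own key (the user can only forward it, never read it)."""
    ticket = {"session_key": session_key, "service": "mail",
              "issued": time.time(), "lifespan": lifespan}
    return encrypt(user_password, {"session_key": session_key}), \
           encrypt(service_key, ticket)

def service_accepts(service_key, ticket_blob, now=None):
    """Mail-service side: decrypt the ticket with its own key and reject
    anything presented after the lifespan has expired (replay defense)."""
    ticket = decrypt(service_key, ticket_blob)
    now = time.time() if now is None else now
    return now <= ticket["issued"] + ticket["lifespan"]
```

Note how the lifespan check is exactly what turns "tickets are good forever" into "tickets expire".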

How Kerberos was evolved - Inside Story Part 1

I always wanted to find something on Kerberos (earlier named Charon) that could give me the inside story of the protocol: the why, the how, the what, and so on.

In this post I try to explain it in a few easy steps, taking a problem-and-solution approach to the evolution of the protocol.

The image below shows the first version of their design.

The problem with the first version is that anybody can easily impersonate anybody else, so there is no security or accountability. There is no authentication piece involved (NO PASSWORD); it is just a user ID which you pass to get the service.

The next image solves the first problem by introducing a centralized authentication server. Every user and every service has a password, stored in the authentication server named Charon.

Here is how it works:

1) The user authenticates with the centralized server.
2) The server authenticates the user and gives him a TICKET (text encrypted with the service's password).

* The encrypted text contains the username, the service name, and the IP address of the user.
3) The user passes this ticket to the service, and the service verifies the user's identity before granting access.

Problems with this:

1) The user has to authenticate every time he wants to use any service. For example, the user got a TICKET for the mail service, but to print some mail he has to enter the password again.
2) The password flows over the network in clear text (people can capture it on the network).

The above problems are solved by introducing another component, the ticket granting service, inside the authentication service. It grants a ticket which can be reused.

Another component, added at the system level, is the KINIT service, which is responsible for the handshake with the authentication service. Here is how it works:

1) The KINIT service takes the user ID and password from the user and passes only the user ID to the authentication service (it does NOT pass the password at all).
2) The authentication service builds the TICKET GRANTING TICKET and encrypts it with the user's password, because it knows what the user's password is.
3) KINIT decrypts the token using the user's password. If it decrypts successfully, the user is authenticated.
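A toy Python sketch of this trick: being able to decrypt the TGT is itself the proof of identity, and the password never travels over the wire. The key derivation and XOR "cipher" are illustrative only, not real cryptography.

```python
import hashlib
import json

def derive_key(password: str) -> bytes:
    """Toy key derivation from the shared password."""
    return hashlib.sha256(password.encode()).digest()

def _xor(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key * (len(data) // len(key) + 1)))

def encrypt_tgt(user_password: str, tgt: dict) -> bytes:
    """Server side: encrypt the TGT under the user's password."""
    return _xor(derive_key(user_password), json.dumps(tgt).encode())

def kinit_decrypt(password: str, blob: bytes):
    """Client side: success means the user knew the password; the
    password itself was never sent."""
    try:
        return json.loads(_xor(derive_key(password), blob))
    except ValueError:
        return None  # wrong password: decryption yields garbage
```

The same idea is what step 3 above describes: decryption success doubles as authentication.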


Two problems remain with this approach:

1) Once tickets are generated at the user's system and passed over the wire for service access, they can be tapped on the wire. For example, the user got the encrypted ticket during the authentication process and decrypted it successfully; now the user passes this ticket to the mail server for mail access. At this point someone can capture the ticket and replay it later.

2) Tickets are good forever. Once assigned, a ticket never expires.

Keep reading, because I will post the rest of the story in my next post...

Wednesday, October 17, 2007

Free Sun Identity Management training

Today, while browsing blogs on identity management, I ended up at Marina Sum's blog. She has posted a free two-hour course from Sun introducing the identity management suite and its offerings.

I mentioned last time that it is available only in the USA, but I asked my friend in Australia and he confirmed that it works there too.

Check it out.

Tuesday, October 16, 2007

Cross IDM product integration

Over coffee I often discuss with my friends the projects they are working on and how they are solving problems, whether related to IDM/IAM or any other technology. A few days back I met a friend who is working with webMethods to integrate different applications. While discussing it, in the back of my mind I was thinking: could we have something that helps us integrate IDM/IAM products from multiple vendors?

I think yes; that kind of framework could be built on the SAML and SPML specifications. At the root level these specifications speak XML over an HTTP/S channel. If we could build an IDM middleware component that understands one product's language (XML schema) and converts it to another product's language, our purpose would be solved.

It looks like OpenPTK is one such open source project that Sun is working on; they recently released its latest version. It provides multiple interfaces to connect to the Sun IAM (Identity and Access Control) suite. But I think this is just the start, as this version of OpenPTK does not support cross-product integration.
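The middleware idea can be sketched as a trivial schema translator. The element names and the field map below are invented for illustration; they are not any vendor's real SPML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping between two vendors' provisioning vocabularies.
FIELD_MAP = {"userId": "accountName", "givenName": "firstName"}

def translate(request_xml: str) -> str:
    """Rewrite element names from 'vendor A' to 'vendor B' using the
    map. A real bridge would also handle namespaces, data types, and
    operation semantics, not just element names."""
    root = ET.fromstring(request_xml)
    for elem in root.iter():
        if elem.tag in FIELD_MAP:
            elem.tag = FIELD_MAP[elem.tag]
    return ET.tostring(root, encoding="unicode")

out = translate(
    "<addRequest><userId>jdoe</userId><givenName>John</givenName></addRequest>"
)
```

The hard part in practice is of course semantic mapping, not syntax, but this shows where such a component would sit.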

OpenDS (open source directory server)

I have always been a deep lover of directory servers and the technology behind them. I was very happy to see that Sun (I have worked mostly with Sun LDAP, and I love it) is aggressively working on an open source directory server project, OpenDS, which will take the directory server to new levels. I read about the intention of the project and why they have not built on current open source projects like OpenLDAP or Apache Directory; the reason, they feel, is that they want to redesign it around current clients' requirements.

Below are the intended features of OpenDS:

Performance. Lots of features are important, but performance is almost always near the top of the list. It needs to be extremely fast, outperforming all other servers wherever possible.

Upward Vertical Scalability. It needs to be capable of handling billions of entries in a single instance on appropriately-sized hardware. It should be able to make effective use of multi-CPU, multi-core machines with hundreds of gigabytes of memory.

Downward Vertical Scalability. It needs to be capable of running adequately in low-memory environments so that all essential components can be functional on edge devices like cell phones and PDAs.

Horizontal Scalability. It needs to be possible to use synchronization to achieve higher levels of read scalability by adding servers to the directory service. In addition, it needs to be possible to use data distribution in conjunction with synchronization to achieve horizontal read and write scalability for deployments into the billions.

Supportability. The server should be easy to support and maintain. Administration should be intuitive, and wherever possible the server should provide sufficient information and notifications to enable corrective actions, even predictively.

Security. The server must provide extensive security in areas like access control, encryption, authentication, auditing, and password and account management.

Extensibility. Virtually every aspect of the server should be customizable. It needs a safe and simple plugin API that delivers additional points of extensibility, including, but not limited to, password validation algorithms, password generators, monitor information providers, logging subsystems, backend repositories, protocol handlers, administrative tasks, SASL mechanisms, extended operations, attribute syntaxes, and matching rules.

Synchronization. The server must support data synchronization between instances, including not only total data synchronization but also partial synchronization (with fractional, filtered, and subtree capabilities), and must also provide a means of synchronizing with other applications and data repositories.

Availability. The server must be robust enough to continue running properly even if serious errors are encountered.

Portability. The server needs to be written entirely in Java so that it can run on any platform.

Reliability. A directory service is one of the most critical components of a business infrastructure. It is absolutely essential that the service function despite hostile or unexpected events and that the data it delivers be trusted.

Compatibility. The Sun Java System Directory Server will continue to be maintained over time and will not be immediately replaced by Sun products based on OpenDS. However, OpenDS must provide support for virtually all existing features of the Sun Java System Directory Server. Migration from other directory server implementations should also be taken into consideration when applicable.

Monday, October 15, 2007

PCI Compliance

The next buzzword of the coming year is going to be PCI DSS (Payment Card Industry Data Security Standard). Since identity theft and attacks on sensitive information are rampant these days, it is very important to secure the MONEY, wherever and in whatever form the MONEY is :-)

The main goal of this compliance standard is to protect credit/debit card information wherever it lives:
1) Issuing banks
2) Online merchants like Amazon (they also store credit card information)
3) Utility companies like your electricity provider; nowadays almost everyone accepts online payment
4) Even merchants in local stores should comply with the standard, because they also handle card information. (Merchants dealing only with the debit card and PIN combination are exempt from PCI compliance.)

Companies need to earn customers' confidence before customers will provide sensitive information (credit card details) for payment, and this compliance standard is going to help the industry achieve exactly that.

Below are the PCI Data Security Standard Control Objectives & Requirements

I. Build and Maintain a Secure Network
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

II. Protect Cardholder Data
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

III. Maintain a Vulnerability Management Program
5. Use and regularly update anti-virus software
6. Develop and maintain secure systems and applications

IV. Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need-to-know
8. Assign a unique ID to each person with computer access
9. Restrict physical access to cardholder data

V. Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

VI. Maintain an Information Security Policy
12. Maintain a policy that addresses information security

Sunday, October 14, 2007

Cross Site Scripting

Cross-site scripting is a web application attack, also known as a code insertion attack. It comes in two basic forms:

1) Insertion of script on the server side, for example in database content. A hacker may insert a script into the database via forum postings and the like. The script executes whenever a user browses a page built from the content containing the hidden script.

2) Insertion of script content in the URL. A hacker may send such URLs via email. When the user clicks the link, the script executes on his system and becomes active when he enters his credentials to get into the site; it captures the credentials and posts them to the hacker's site behind the scenes. Attackers sometimes use URL encoding, UTF encoding, or ASCII representations of the URL parameters so the user thinks it is genuine data and clicks the URL.
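The standard defense on the rendering side is to escape user-supplied content before it reaches the page. The sketch below (function names are my own) also shows how a URL-encoded payload looks once it is decoded, which is why naive filters that only scan for literal `<script>` miss it.

```python
import html
from urllib.parse import unquote

def render_comment(user_input: str) -> str:
    """Escape user content so injected markup is displayed, not executed."""
    return "<p>" + html.escape(user_input) + "</p>"

payload = "%3Cscript%3Esteal()%3C/script%3E"  # URL-encoded <script> tag
decoded = unquote(payload)                     # what the browser decodes it to
safe = render_comment(decoded)                 # escaped, so it cannot run
```

The same escaping must happen for content coming back out of the database (case 1 above), not just for URL parameters.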

NEVER click bank URLs arriving in email. Banks generally don't send their site URLs in emails.

I liked the white paper that discusses this attack in detail: how these attacks are exploited and what can be done to prevent them. You can also visit the XSS FAQ section for further reading.

Google also had an XSS vulnerability, which was fixed earlier this year.


Saturday, October 13, 2007

SAML in Action

Today I was browsing the net to learn more about SAML and came across a good implementation of SAML by Google. In this post I am going to detail my understanding of Google's SAML implementation.

Security Assertion Markup Language (SAML) is used in a federated environment where TRUST is needed between a service provider (like Google's Gmail or Google Talk) and an identity provider (any organization using Google applications). Google provides services like Gmail and Google Talk for enterprises that want to use these applications under their corporate domain. SAML can be used to enable these services and provide single sign-on between the corporate network and the Google application network.

The diagram below shows the four main entities in the SAML implementation:

  1. Actor: the user using the application hosted at the service provider.

  2. Service Provider: in this case Google, exposing its services like Gmail and Google Talk.

  3. Identity Provider: the corporation using the Google services.

  4. Repository: the store holding the user base for the identity provider.

Step By Step Flow of SAML in Action

1) The user accesses the service hosted by the service provider. There are two possible scenarios:

a) The user is already authenticated at the corporation (identity provider): if so, the identity provider will have set the session in a cookie. This cookie is passed along when the service provider redirects the user for the SAML assertion. The identity provider validates the cookie and passes the user information in the assertion to the service provider without asking the user to enter credentials.

b) The user accesses the service directly, without logging into the corporate portal: in this case the user will be asked to enter credentials when the service provider requests an assertion from the identity provider.

2) The service provider requests an assertion: since SAML requests and SAML responses (assertions) are sent as XML over HTTP/S, the XML structures for the request and the assertion must be defined, and Google has defined such schemas. Google also uses an RSA/DSA key pair (public/private) to validate the assertion sent by the identity provider: a corporation using SAML SSO needs to generate the key pair and register the public key with Google's assertion validator (the component residing at the service provider [in this case Google] that extracts the user information from the assertion).

3) The identity provider generates and sends the assertion: the identity provider authenticates the user per the two cases in step 1 and passes the user ID to the service provider.

Since we cannot pass XML documents directly as request parameters (GET/POST) over HTTP, the SAML requests and responses must be encoded.
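The SAML HTTP-Redirect binding, for example, DEFLATE-compresses the XML, base64-encodes it, and then URL-encodes the result so it fits in a query parameter. A minimal Python sketch of that round trip:

```python
import base64
import zlib
from urllib.parse import quote, unquote

def encode_saml_request(xml: str) -> str:
    """Raw DEFLATE (zlib header and checksum stripped), then base64,
    then URL-encode -- per the SAML HTTP-Redirect binding."""
    deflated = zlib.compress(xml.encode())[2:-4]
    return quote(base64.b64encode(deflated).decode("ascii"))

def decode_saml_request(param: str) -> str:
    """Reverse the encoding on the identity-provider side."""
    data = base64.b64decode(unquote(param))
    return zlib.decompress(data, wbits=-15).decode()
```

The POST binding skips the compression and just base64-encodes the XML into a form field.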

Samples of the SAML request/responses

SAML Request XML format

<?xml version="1.0" encoding="UTF-8"?>

Encoded Request passed to the identity provider


SAML Response Generated at the Identity provider

<?xml version="1.0" encoding="UTF-8"?>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments" />
<SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1" />
<Reference URI="">
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success" />
<Assertion ID="dojnoaponicbieffopfdecilinaepodfimmkpjij">
<Issuer>https://www.opensaml.org/IDP </Issuer>
<NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"> demouser </NameID>
<SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer" />
<Conditions NotBefore="2003-04-17T00:46:02Z" NotOnOrAfter="2008-04-17T00:51:02Z"> </Conditions>
<AuthnStatement AuthnInstant="2006-08-17T10:05:29Z">
<AuthnContextClassRef> urn:oasis:names:tc:SAML:2.0:ac:classes:Password </AuthnContextClassRef>


Friday, October 12, 2007

How to View cookies in browser

Often during troubleshooting we need to check which cookies are set in the browser. Everyone has figured out their own way of doing this; I would like to share the way I do it.

1) You can type javascript:document.cookie in the browser address bar to check the cookies. The problem is that the page content is replaced, and you have to reload the page to see it again. Use the second approach to solve this.

2) An alternative is to use a JavaScript alert: javascript:alert(document.cookie). With one or two cookies this is fine, but with many cookies reading the alert box becomes a pain. Use the third approach to overcome this.

3) Use javascript:alert(unescape(document.cookie).replace(/;/gi,"\n\n")) to get each cookie displayed on its own line.

Thursday, October 11, 2007

RSA Conference 2007

I got an opportunity to attend the RSA Conference here in Savannah (Hilton Head Island). It was more of a partner-oriented conference, but they talked about their product line and previewed some of the new product features coming to market in the next fiscal year.

I attended seminars on following products:

1) Authentication Manager 7.1 (will hit the market in March-April 2008)
This product's main focus is strong authentication (one-time passwords using hardware and software tokens). The software token is a nice feature: if a person forgets to carry the hardware token, he can get a software token through self-service. The user answers security questions, and the software token is sent to him via either SMS or email.

They are also working on putting the software on mobile phones, so the token can be generated there instead of carrying a separate device.

While attending the conference I asked how the software works on the cell phone: is it the same software for all users, or is it user-specific? I know these are silly questions, but I never hesitate to ask, so I did. They told me the software is the same for everyone, but the SEED is person-specific, and that one SEED can be assigned to at most three devices at a single point in time. For example, the same SEED can be loaded on a person's cell phone and also used for his hardware token.
More on Authentication Manager in later posts, once I recollect my understanding.
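For flavor, here is what a seed-plus-time token could look like, sketched TOTP-style in Python. RSA SecurID's actual algorithm is proprietary, so this is only an analogy for how a per-user seed and the clock combine into a short-lived code.

```python
import hashlib
import hmac
import struct

def soft_token(seed: bytes, at: float, step: int = 60, digits: int = 6) -> str:
    """TOTP-style one-time code: HMAC the current time window with the
    per-user seed and truncate to a short numeric code."""
    counter = int(at // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)
```

Any device holding the same seed produces the same code in the same time window, which matches what they told me about one seed on up to three devices.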

2) RSA Key Manager

This product helps manage the symmetric and asymmetric keys in an organization. It can serve RSA products, or it can be used to manage encryption keys for home-grown applications.

3) File Security System

RSA wants every piece of content to be written securely and accessed by authorized persons ONLY, which is why they acquired a company working in this area. The software stores file contents securely, and you can protect a whole folder and then assign which persons can access those files and folders.

4) Database security System

This product secures database content at the column level.

I will talk more about the insides of these products in coming posts, so keep reading.

Monday, October 8, 2007

Federated Access Control

Generally I hate copy-pasting content from other sites, but I liked this one a lot (because of the cute animations), so I thought I would share it with the friends watching my blog.

Sunday, October 7, 2007

DSML Introduction

Directory Services Markup Language (DSML) provides a mechanism for accessing directory servers as XML documents. Version 1.0 only offered the flexibility to read directory servers, but version 2.0 is more capable: with it, directory server entries can also be updated.

DSML functionality is motivated by scenarios including:

1) A handheld device which does not have an LDAP client.
2) An application developed using XML tools that needs to access LDAP.
3) A program that needs to access a directory across a firewall which does not allow LDAP ports.
4) Integrating directory servers from two different vendors.

Click on the image below to view the animation.

Most existing directory server vendors support DSML, but it needs to be enabled explicitly. Below are some links specific to Sun DS.

How to enable DSML support

Accessing Directory using DSML

DSML tools are also available for integrating with existing applications.
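As a minimal sketch, here is what a DSMLv2 search request looks like when built programmatically. The structure follows the DSMLv2 core schema as I understand it; real deployments wrap this in a batchRequest and a SOAP envelope.

```python
import xml.etree.ElementTree as ET

DSML_NS = "urn:oasis:names:tc:DSML:2:0:core"

def search_request(base_dn: str, present_attr: str) -> str:
    """Build a DSMLv2 searchRequest with a simple presence filter,
    e.g. 'every entry under base_dn that has a uid attribute'."""
    ET.register_namespace("dsml", DSML_NS)
    req = ET.Element(f"{{{DSML_NS}}}searchRequest",
                     {"dn": base_dn, "scope": "wholeSubtree",
                      "derefAliases": "neverDerefAliases"})
    flt = ET.SubElement(req, f"{{{DSML_NS}}}filter")
    ET.SubElement(flt, f"{{{DSML_NS}}}present", {"name": present_attr})
    return ET.tostring(req, encoding="unicode")

xml_doc = search_request("dc=example,dc=com", "uid")
```

This is exactly the kind of payload a firewalled or XML-only client (scenarios 1-3 above) would POST over HTTP instead of opening an LDAP connection.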


Friday, October 5, 2007

How to enable special character support on Sun Directory Server?

Organizations today are spread across the globe and want all character sets supported in their applications, so their users can access them from any part of the globe in their local language.

To enable special character support in Sun DS, follow the steps below:

1) Log in to the directory manager console.
2) Open the instance on which you want to enable special character support.
3) Go to the Configuration tab.
4) Expand the plugins.
5) Select the "7-bit check" plugin and disable it (it is enabled by default).
6) Restart the server; now you are all set to store special characters in the directory server.

The same activity can be done from the command line, but I am not posting that part here.

Thursday, October 4, 2007

Servlet Filter and SSO

The Servlet 2.3 specification came with many new features. One such feature, used in single sign-on (SSO) implementations, is the servlet filter. The image below outlines how servlet filters can intercept request and response headers before the content reaches its destination.

Servlet filters can intercept an incoming request and check its header variables; they can also make dynamic decisions after validating the HTTP headers.

How Servlet Filters can be used for SSO:

Agents that provide SSO functionality use servlet filters to intercept every request before it hits its destination. The filter reads the session header variable in the request and checks it against the session manager of the access control product. If the session in the request is valid, the filter passes the request to the other agent components for further processing, such as role validation. If the session is not valid, the user is redirected to the login page. In the diagram below I have tried to detail some of the J2EE agent components.
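The filter pattern is not Java-specific. Here is the same intercept-validate-or-redirect idea as a Python WSGI middleware sketch; the header name, login URL, and session check are all made up for illustration.

```python
def sso_filter(app, session_validator, login_url="/login"):
    """Wrap a WSGI app the way an SSO agent wraps a servlet: validate
    the session before the request ever reaches the application."""
    def middleware(environ, start_response):
        token = environ.get("HTTP_X_SESSION_TOKEN", "")
        if session_validator(token):
            return app(environ, start_response)  # valid session: pass through
        start_response("302 Found", [("Location", login_url)])
        return [b""]                             # invalid: redirect to login
    return middleware

# Toy stand-ins for the protected application and the session manager:
def protected_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"secret page"]

wrapped = sso_filter(protected_app, lambda t: t == "valid-session")
```

The application itself never sees unauthenticated traffic, which is exactly the property the agent's servlet filter provides.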

Tuesday, October 2, 2007

SSO vs centralized authentication

Single Sign On (SSO) provides a mechanism in which the user is authenticated only once and then given access to various applications. Behind the scenes the systems may still be authenticating/authorizing the user against different backends. SSO works mainly with web-based applications; we cannot (easily) achieve SSO for logging into Unix servers or other non-web applications.

Centralized authentication helps in such cases. The user has to enter credentials multiple times, but they are always the same ones. All applications authenticate the user against a centralized repository (such as Active Directory, another LDAP server, or a common database). Since every application authenticates against one server, the user doesn't need to remember multiple credentials. Also, once the account is terminated in the centralized system, the user can no longer access any of the applications.

Inside Active Directory Password Filter DLL

In an earlier post, How to get plain text password in Active Directory???, I gave some insight into how to write a password filter DLL.

In this post I am going to talk about some issues and precautions that should be kept in mind while writing these components.

Like user accounts, Active Directory maintains computer accounts in its repository, and it enforces the same password policy on them. For example, if there is a policy to change the password every 30 days for user accounts, the same applies to the computer accounts stored in Active Directory. This policy can be disabled for computer accounts, but that introduces security issues on the network, because computer accounts are used to provide a form of internal authentication when you access shared resources such as a printer or a shared folder.

Computer account password change notifications should be ignored when passing credentials to identity management (IDM) products for synchronization (I am assuming you know why this is done). If the password filter DLL forwards these computer password change notifications to the IDM product, they will not be processed, because these products do not manage computer accounts.

Every computer account has a sAMAccountName attribute whose value ends with a DOLLAR ($) character. The password filter DLL should check whether the sAMAccountName in the event ends with DOLLAR, and if so skip the whole process. This provides two benefits:
1) It increases the efficiency of the password filter DLL, since it skips processing unnecessary accounts.
2) Better utilization of network bandwidth.
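Production password filters are native DLLs written in C/C++, but the skip-computer-accounts check itself is one line. Here it is sketched in Java for readability (class and method names are my own):

```java
public class PasswordSyncFilter {
    /** Computer accounts in AD have a sAMAccountName ending in '$';
     *  their password changes should not be forwarded to the IDM system. */
    public static boolean shouldForward(String sAMAccountName) {
        return sAMAccountName != null && !sAMAccountName.endsWith("$");
    }

    public static void main(String[] args) {
        System.out.println(shouldForward("jsmith"));     // user account: true
        System.out.println(shouldForward("WKSTN-042$")); // computer account: false
    }
}
```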

Hope these tips help you build a more efficient password filter DLL.

Friday, September 28, 2007

End Dating an account in Linux

Often we want to end-date a Linux account for security reasons. The command below can be used to do so (supply the expiry date and the user name):

/usr/sbin/usermod -e YYYY-MM-DD username

This option (-e) can also be used when the account is originally created. That gives extra security if you already know when the person will roll off the project.

Keep in mind that 70-80% of hacking incidents originate inside the organization.

Thursday, September 27, 2007

Strong Authentication

In today's world, as more and more applications get consolidated into enterprise portals, we need a way to protect sensitive applications with an extra layer of protection. Strong authentication, or multi-factor authentication, is the solution to this problem. Let me give a brief introduction to the different ways in which we can authenticate ourselves:

1) What we Know : UserId, "secure PIN", Password, Security question/answer

2) What we Have : "Secure token", "Bank ATM card"

3) What we Are : Biometrics

Normally we use a userid/password combination to get into any secure site. That authentication mechanism is secure, but only to a certain extent.

Take an example where you are authenticating to a bank site using a userid/password combination, and someone on the network (a Man in the Middle) captures your userid/password. He can use those credentials to get into the banking network... (you know what I am thinking here). It may sound complicated on paper, but people have done it, and our goal is to reduce such attacks to the maximum possible extent.

If we introduce one more layer into the authentication logic by adding 2) or 3) from the list above, we make the system more secure, because the Man in the Middle will not have what you have, and he certainly cannot be what you are. This is called strong authentication.

There are various ways to introduce a second factor; one such mechanism is the One Time Password (OTP). In an OTP scheme a system-generated password can be used only once, so even if it is captured on the network it cannot be replayed.

Refer to the OTP RFC for more insight into OTP. There are many OTP products available in the market. Apache is also building one such system under the name Triplesec: http://directory.apache.org/triplesec/
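To make the "usable only once" property concrete, here is a simplified hash-chain sketch of the idea behind such schemes (an illustration only, not the RFC's exact algorithm or encoding):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Simplified Lamport-style hash chain: the server stores hash^n(seed);
// the user presents hash^(n-1)(seed), which the server can verify with one
// extra hash. A captured value is useless for the next login.
public class HashChainOtp {
    static byte[] hash(byte[] in) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(in);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] hashN(byte[] seed, int n) {
        byte[] v = seed;
        for (int i = 0; i < n; i++) v = hash(v);
        return v;
    }

    public static void main(String[] args) {
        byte[] seed = "secret-seed".getBytes(StandardCharsets.UTF_8);
        byte[] stored = hashN(seed, 100); // what the server keeps
        byte[] otp = hashN(seed, 99);     // the next one-time password
        // Server check: one more hash of the presented OTP must match the
        // stored value; on success the server stores the OTP as the new value.
        System.out.println(Arrays.equals(hash(otp), stored)); // true
    }
}
```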


Wednesday, September 19, 2007

Virtual Directory pros and cons

Virtual Directory is a concept in which the application/product providing the functionality does not store any data. It is just a representation, in a single place, of global data spread across an organization. These products can aggregate data from directory servers, databases, data coming from applications over web service interfaces, and so on. Data coming from databases can also be exposed over an LDAP interface.

The main feature is to gather the data and present a consolidated view to the end application. The advantage is fewer calls to individual systems when they store different sets of data. For example, if the directory server has the user information and the database stores the role information, a common view can be exposed through which the end application can access all the data in one request. Applications don't need to make individual calls to different repositories to get the data.
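As a toy illustration of that one-request aggregation (the backends, attribute names, and values here are all made up):

```java
import java.util.HashMap;
import java.util.Map;

// One lookup fans out to two backends and returns a merged view;
// nothing is stored by the "virtual directory" itself.
public class VirtualView {
    static Map<String, String> ldap = Map.of("uid", "jsmith", "sn", "Smith");
    static Map<String, String> db   = Map.of("role", "manager");

    static Map<String, String> lookup() {
        Map<String, String> merged = new HashMap<>(ldap); // call 1: directory server
        merged.putAll(db);                                // call 2: database
        return merged;                                    // one response to the app
    }

    public static void main(String[] args) {
        System.out.println(lookup());
    }
}
```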

Keep in mind that virtual directories do not store any data of their own, and they do not propagate data changes from one system to another. Their job is just to gather all the data under one roof for easy access by the applications.

Another advantage of a virtual directory can be restricting access to the data. If you give an application access to the organization's directory server, you cannot restrict it to reading only a person's first and last name: if the application has access, it has access to all the attributes. With a virtual directory you can restrict access to exactly the data you want the application to see.

Interface: Virtual directory products expose proprietary APIs which applications use to access the systems. Since virtual directories do not store information locally, there can be performance issues in fetching data from the backend repositories.
Caching problem: If someone suggests caching the data locally at the virtual directory to overcome the performance issue, the next question is how to sync data that is updated in a backend repository by another application.

Saturday, September 15, 2007

Complexity of Session Timeout

One of our application teams was using the centralized access control product to secure access to their application. After a while, users started reporting timeouts from the application. We thought that increasing the idle/session timeout settings would solve the problem, so we raised both values to several hours just to test whether those settings were the culprit. The change made no difference. Our next suspect was the application code, so we asked the application team to check whether the code was expiring the session itself; they confirmed that was not the case either. We then approached the application deployment team and asked them to change the session timeout parameter on the application server. As soon as they set the session timeout to unlimited, the problem was resolved.

My question to myself was: how can that application server parameter be responsible for the timeout and for redirecting the user to the login page? I read the Servlet Specification (version 2.3), and here is what I believe happens in this case.

1) The user accesses the application page for the first time.

2) Since an access control product protects the application, the user is redirected to the access control login page.

3) The user authenticates successfully and is redirected to the application.

4) The access control product's agent uses the session variable (JSESSIONID) as a key to cache user details such as session expiry, idle timeout, etc.

5) The user accesses the application again. This time the JSESSIONID is passed from the browser as a cookie to the application/web server and is used as the key to look up session information in the agent's cache.

6) When the JSESSIONID expires at the application server, a new JSESSIONID is generated and the cookie value is overwritten.

7) The user accesses the page again, but this time the new JSESSIONID value is passed to the application/web server. The new value is looked up in the cache and not found, so the agent treats the request as coming from a new user and redirects to the login page.
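The steps above can be sketched as a tiny cache model (hypothetical names; real agents obviously do much more):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of steps 4-7: the agent caches session state keyed by
// JSESSIONID, so a regenerated cookie looks like a brand-new user.
public class AgentCacheSketch {
    private final Map<String, Long> expiryByJsessionId = new HashMap<>();

    void onLogin(String jsessionId, long expiresAtMillis) {
        expiryByJsessionId.put(jsessionId, expiresAtMillis);  // step 4
    }

    /** Returns "serve" or "redirect-to-login". */
    String onRequest(String jsessionId, long nowMillis) {
        Long expiresAt = expiryByJsessionId.get(jsessionId);  // step 5
        if (expiresAt == null || nowMillis > expiresAt) {
            return "redirect-to-login";                       // step 7
        }
        return "serve";
    }

    public static void main(String[] args) {
        AgentCacheSketch agent = new AgentCacheSketch();
        agent.onLogin("JSESSIONID-1", 10_000L);
        System.out.println(agent.onRequest("JSESSIONID-1", 5_000L)); // serve
        // App server expired the session and issued a new cookie (step 6):
        System.out.println(agent.onRequest("JSESSIONID-2", 5_000L)); // redirect-to-login
    }
}
```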

What value should be set for the session expiry:

Ideally, the application server's session (JSESSIONID) lifetime should match the session timeout of the access control product. If the access control product has a session expiry of 1 hour and an idle timeout of 30 minutes, the application server session expiry should be set to 1 hour.


Magic Of HTTP Header Variables

Last week I attended a vendor presentation in which the presenter talked about how their product can be installed on application servers to bypass the login mechanism. In this approach the application login page needs some modification. I understood what he was describing and thought I would share it with the friends reading my blog.

The product must be using the magic of HTTP header variables, and here are the steps which can make it work:

1) As soon as the user logs into the centralized access control product, it sets a browser cookie for the user.

2) The user clicks on the application URL; the product agent sits on the application/web server.

3) The browser passes the cookie to the agent (assuming all the applications run in a single cookie domain).

4) The agent intercepts the cookie token and verifies its validity (session timeout/idle timeout etc.) with the centralized server.

5) After validation, the agent sets predefined HTTP header variables in the request and forwards the request to the application/web server.

6) The application checks whether the required header variables are present in the HTTP request.

7) If the required header variables are found in the request and the application's authorization module approves, the requested page is served to the user. Custom header variables can carry information such as username, employee number, and job code, which can be used to decide authorization rights.

8) If the required HTTP header variables are not found in the request, the user is redirected to the application login page.

Many HTTP header variables are part of the HTTP standards, but custom header variables can be defined based on the requirements. One thing to note in the scenario above: every time the user requests a new page, the same HTTP header variables need to be passed to the application again, since the life of an HTTP header variable is limited to a single request.
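The application-side check (steps 6-8) can be sketched as follows; "SM_USER" is an illustrative header name I chose, not a claim about any particular product:

```java
import java.util.Map;

// Hypothetical check an application performs on agent-injected header variables.
public class HeaderCheck {
    static String route(Map<String, String> headers) {
        String user = headers.get("SM_USER");
        if (user == null || user.isEmpty()) {
            return "redirect:/login";    // header missing: back to the login page
        }
        return "serve-as:" + user;       // header present: serve the page
    }

    public static void main(String[] args) {
        System.out.println(route(Map.of("SM_USER", "jsmith"))); // serve-as:jsmith
        System.out.println(route(Map.of()));                    // redirect:/login
    }
}
```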


Thursday, August 23, 2007

How Oracle ERP stores password

Most Oracle Applications 11i implementations are vulnerable to a significant security weakness in the encryption of passwords within the application where an insider may be able to circumvent all application controls by accessing any application account or obtain the APPS database account password.

The fundamental issue is that the Oracle Applications 11i application account passwords are stored in the database encrypted using the APPS database password as the encryption key rather than using a strong, one-way hash algorithm.

Oracle Applications 11i stores passwords in two tables: FND_USER and FND_ORACLE_USERID. The FND_USER table stores application user account passwords and the FND_ORACLE_USERID table stores internal Oracle Applications database account passwords. Both tables use the same encryption algorithm to protect the passwords.

The FND_USER table has two columns which store passwords:

1) ENCRYPTED_FOUNDATION_PASSWORD --> stores the APPS password encrypted using the username/password as the key.
2) ENCRYPTED_USER_PASSWORD --> stores the user's password encrypted using the APPS password as the key.

If you know a username/password, have access to the FND_USER table, and know the Java code used for encryption, then you can recover the APPS password of that system. And if you know the APPS password, you can recover anyone's application password.

Both of the columns mentioned above can also store the following values:

1) External --> set if the Oracle application uses an external system, such as a directory server, for authentication.

2) Invalid --> set for Oracle internal users.

3) ZG_error (error message) --> if encryption fails for some reason, the error message is stored in this column. The user will not be able to authenticate in that case.


The Oracle Applications login process involves the following general steps:
1. The ENCRYPTED_FOUNDATION_PASSWORD and ENCRYPTED_USER_PASSWORD are retrieved from FND_USER if the account exists and is active.
2. The APPS password is obtained from ENCRYPTED_FOUNDATION_PASSWORD by using the username and password as the decryption key.
3. The user password is decrypted from ENCRYPTED_USER_PASSWORD by using the APPS password as the decryption key.
4. The decrypted user password is compared to the entered user password.

If the APPS password is changed, all the stored passwords are re-encrypted with the new APPS password. The APPS password should be changed using either the "Oracle Users" form or the FNDCPASS utility.

Thanks to : http://www.integrigy.com/security-resources/whitepapers/Integrigy_Oracle_Apps_Password_Issue.pdf


Thursday, August 16, 2007

while equivalent for loop

Today, while doing some programming, I came across a small thing about loops that I had never noticed before. Everyone who has done some programming has used for and while loops at some time or the other.

In this post I am going to give a simple example of converting a for loop into a while loop.

For loop:

int i = 0;
for (System.out.println("initialization"); i < 10; System.out.println("increment/decrement"), i++) {
    System.out.println("Inside loop");
}

Equivalent while loop:

int i = 0;
System.out.println("initialization");
while (i < 10) {
    System.out.println("Inside loop");
    System.out.println("increment/decrement");
    i++; // this should be the last statement in the loop; without it the condition i < 10 never becomes false
}

Friday, July 20, 2007

Who rebooted the Linux system

The availability of details depends on the syslog settings, but in any case you can do the following:

1. Get the boot time. You can get it in a couple of ways: type the "uptime" command and count back to see how long the system has been up, or go to /var/log and check the boot.log file, or look in the "messages" file in the same directory for the "syslog started" timestamp.

2. Type the "last" command to see which users were logged in at the time the system was rebooted.

3. Check those users' shell history files (~username/.bash_history) for su or sudo commands.

All of the above makes sense ONLY if you have proper control of the root account and no one but the root user knows the root password. If you share the root password, it is almost impossible to find out who rebooted the system. The only chance is if you had syslog set to record network events: check the messages and security logs in /var/log for connections with timestamps around the reboot. If your DHCP uses long leases, or static IPs were used, or the log entries resolve DNS names, you can get a list of suspects. Then proceed to step 3.

Keep in mind that if someone INTENTIONALLY rebooted the system, had complete root access, and possesses some skills, it may be impossible to track; he or she may even forge the logs in any desirable way.


Courtesy: http://www.unix.com/unix-for-dummies-question-and-answers/27272-how-to-identify-who-rebooted-the-linux-server.html

Tuesday, July 17, 2007

How to get the oracle version

Sometimes we come across a situation where we have to find out which version of Oracle is running. Here is a simple query you can run to find the exact Oracle version:

SELECT * FROM v$version;
I once had a strange scenario where I had to connect to different versions of Oracle instances from a single web app. Oracle's site helps in finding a driver that matches those versions' driver requirements.


Thursday, July 12, 2007

can chmod command be dangerous???

We all think that chmod is there just to help us and can never harm the system. But wait a second... too much permission can be dangerous too. I learned this today when one of my team members ran the command below on one of our Linux servers:

chmod -R 777 *

As soon as the command ran, the Linux system stopped serving several services; I was not even able to ssh to the box (sshd, for one, refuses to operate when its key and configuration files are world-writable). I had to reduce the permissions on the files to get ssh working again.

Wednesday, July 11, 2007

User permission management in Linux

We have been doing identity management and access control for a long time, yet we often don't put many restrictions on our development team, reasoning that they are our friends. That may be true, but I still feel it is very necessary to define fine-grained access control for everyone.

In this post I am going to discuss some of the very common and simple Linux user management tasks.

1) Adding a new user to Linux (useradd)

-d home directory
-s starting program (shell)
-g (primary group assigned to the users)
-G (Other groups the user belongs to)
-m (Create the user's home directory)


useradd -g users -G mgmt -s /bin/bash -d /home/roger -m roger

2) Modifying existing user (usermod)

-d home directory
-s starting program (shell)
-p password
-g (primary group assigned to the users)
-G (Other groups the user belongs to)


usermod -Gothers roger

3) Deleting a user (userdel)

-r (remove home directory)


userdel -r roger

4) /etc/passwd is the file which keeps user names and primary groups. The format of the file is:

User name (normally all lower case)
Password (this field only contains the letter 'x'; see below)
User ID (a unique number for each user)
Primary Group ID
Comment (normally the person's full name)
Home directory (normally /home/<user name>)
Default shell (normally /bin/bash)

Each field is separated by a colon.

5) The (hashed) password for each user is actually stored in the /etc/shadow file.

6) Group information for the user is stored in /etc/group. Format of this file is

Group name
Group password (hardly ever used)
Group ID
User names (separated by commas)

Note: Do not edit this file by hand. Modify the user with the usermod command, which updates this file for you.


As I mentioned earlier, you don't want users to use a shared account, yet some tasks need elevated privileges. Sudo is there to help us achieve this. Here are some simple usages:

1) Sudo permissions are stored in the file /etc/sudoers

2) Never edit the file directly with vi. Use visudo, which checks the syntax before saving:

visudo -f /etc/sudoers

3) Add users into a group and grant the sudo permissions to the group. This keeps the sudoers file clean.

4) Enable sudo logging by putting below text in sudoers file

Defaults logfile=/var/log/sudolog

There is a lot more that can be done with sudoers, but my aim here is to give real-life, usable tips, not to reproduce the Linux man pages. Please use the man pages if you want more :-)

Tuesday, July 10, 2007

Single Sign On - Reduced Sign On

SSO gives users the convenience of not having to enter their credentials again and again to access different applications. Every one of us is happy about that, but the solution has a side effect. For example, suppose you are logged into a portal that does SSO to the payroll site. You sign into the portal, go for a cup of coffee with a friend, and forget to lock your system. Your neighbour, who has always been curious about how much you earn, gets a chance to move his chair to your desk and find out quickly.

This was just an example; there could be multiple secure applications residing in the enterprise portal that are critical for you. That is why organizations are adopting the concept of Reduced Sign On.

Reduced Sign On: this concept handles the scenario above by prompting for another round of verification when you try to access critical applications. The extra layer of authentication could be any one of the list below:

1) Challenge Question

2) Digital Certificate

3) Hardware Token number

4) Smart Card

5) Biometrics

Reducing users' sign-on complexity problems requires a balance between user satisfaction and security. If the scale swings too far toward security when trying to prevent a breach, user satisfaction decreases. Similarly, if the scale swings toward user satisfaction, you can compromise IT security.

Importance of Time Server in SSO environment

Once I deployed an SSO agent at a client location and spent weeks scratching my head over what was stopping SSO from working properly. I checked my configuration hundreds of times and installed the SSO agent on another server with the same configuration, where it worked fine, so what the heck was going on with this machine? I checked the OS patches and everything else, but still NO LUCK!!!

In this article I am going to talk a little bit about the root cause.

Time server: the SSO token contains a timestamp, generated by the server, which is used to check session timeout. My server sat on a box whose clock read T, while my agent sat on a box whose clock read T+30 minutes. The session expiry was 30 minutes.

That is why, whenever my agent box received the SSO token and validated it, the token always appeared to be already expired.

In an SSO environment, please make sure all the servers have their clocks synchronized (for example with NTP), otherwise you may also run into these tricky-to-debug situations.
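A tiny sketch of the arithmetic (illustrative names and values):

```java
// Why an unsynchronized clock breaks token validation: the token's
// timestamp comes from the server's clock, but the agent compares it
// against its own.
public class TokenSkew {
    static final long SESSION_EXPIRY_MS = 30 * 60 * 1000;  // 30 minutes

    static boolean isValid(long issuedAtServerClock, long nowAgentClock) {
        return nowAgentClock - issuedAtServerClock < SESSION_EXPIRY_MS;
    }

    public static void main(String[] args) {
        long issued = 0;                            // server clock at issue time
        long skew = 30 * 60 * 1000;                 // agent clock runs 30 min ahead
        System.out.println(isValid(issued, 0));     // clocks in sync: true
        System.out.println(isValid(issued, skew));  // skewed agent: false, "expired" on arrival
    }
}
```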

Types of attack on Password

It always seems simple when we type our credentials into banking sites to make a transaction, or into commercial sites to purchase some stuff, but in this post I am going to explain some types of password attacks that can make you bankrupt. I am not kidding; read on:

1) Hardware device: When we talk about hardware we assume it takes time to install and only experts can pull off the attack, but NO, this hardware device is very simple to install; a kid could attach one in 10 seconds or less. See the image below to get an idea of how simple it can be. Criminals have installed such devices on bank machines to capture banking credentials, which has cost banks millions of pounds. Students have installed them on their teachers' systems to get access to exam papers. There are plenty of other scenarios where these simple plugs can be installed and exploited.

2) Software malware (keyboard logger): We all enjoy free stuff, and now and then we tend to use free software from the internet. This software can save a couple of dollars, but it may cost you a lot. Imagine downloading some free software that carries malware which modifies the OS to capture your credentials when you log into the system. Such malware can also capture credentials when you access banking sites, store the passwords locally, and send them to the attackers' servers, where your bank credentials can be used to transfer money to their accounts or to gamble in a casino. Be very careful when you use free software.

3) Dictionary attack: Most of our passwords come from one word, or a combination of words, from the dictionary. You can see where I am going: people have written software that can be pointed at a system to try all the likely password combinations. One mitigation, which many organizations and banks have already implemented, is locking the account after N unsuccessful password attempts.

4) Social engineering attack: Suppose you get a call from a person claiming to be from the security team. They got an alert that your account has a problem which may wipe all the data from your box, and they can fix it for you if you just tell them your account password. Chances are some of us would agree and simply hand it over. I have seen many organizations where users tell the help-desk staff their preferred password and ask to have their password reset to it. Users don't realize this opens the door for another person to access sensitive material they should not be looking at.

5) James Bond attack: Researchers claim they can listen to keystrokes and guess a user's password with 90% accuracy. This is one reason highly confidential rooms do not let a single sound escape.

Monday, July 9, 2007

Active Directory PDC vs FSMO

Today I faced a strange issue in my environment that forced me to read about FSMO. Briefly, the problem: we had enabled bi-directional password synchronization, which requires an agent to be installed on Active Directory (AD). In some cases, when a user changed the password the Microsoft way (CTRL+ALT+DEL), the screen would just hang. While troubleshooting, my AD team told me the agent was installed on the FSMO, and I had no idea what the heck they were talking about, so I read about it and thought of posting it here.

FSMO stands for Flexible Single Master Operations

Windows 2000 Multi-Master Model

A multi-master enabled database, such as the Active Directory, provides the flexibility of allowing changes to occur at any DC in the enterprise, but it also introduces the possibility of conflicts that can potentially lead to problems once the data is replicated to the rest of the enterprise. One way Windows 2000 deals with conflicting updates is by having a conflict resolution algorithm handle discrepancies in values by resolving to the DC to which changes were written last (that is, "the last writer wins"), while discarding the changes in all other DCs. Although this resolution method may be acceptable in some cases, there are times when conflicts are just too difficult to resolve using the "last writer wins" approach. In such cases, it is best to prevent the conflict from occurring rather than to try to resolve it after the fact.

Windows 2000 Single-Master Model

To prevent conflicting updates in Windows 2000, the Active Directory performs updates to certain objects in a single-master fashion. In a single-master model, only one DC in the entire directory is allowed to process updates. This is similar to the role given to a primary domain controller (PDC) in earlier versions of Windows (such as Microsoft Windows NT 3.51 and 4.0), in which the PDC is responsible for processing all updates in a given domain.

The Windows 2000 Active Directory extends the single-master model found in earlier versions of Windows to include multiple roles, and the ability to transfer roles to any domain controller (DC) in the enterprise. Because an Active Directory role is not bound to a single DC, it is referred to as a Flexible Single Master Operation (FSMO) role. Currently in Windows 2000 there are five FSMO roles:

1) Schema master
2) Domain naming master
3) RID master
4) PDC emulator
5) Infrastructure master

Courtesy: Microsoft KB


Friday, July 6, 2007

How Java Cryptography Extension works - Password Based encryption Concept

In my last post I discussed the basic APIs used for encrypting and decrypting data. For encryption/decryption you need a key, and that key has to be stored somewhere in the system. In password based encryption you supply the key material manually at encryption/decryption time, because you remember the password. The more complicated the password, the stronger the encryption will be. It cannot be as strong against attack as a key generated by the APIs, but it is good for encrypting data which you are going to decrypt at the other end, since you know the password.

Note: the password is first hashed to derive the key; it is not used directly as plain text.

Example: a key generated by the API for 3DES encryption has 2^168 possibilities. A normal person's password is 6-8 characters long, which gives about 26^6 to 26^8 possibilities. I use 26 because there are 26 letters in the English alphabet; if you add digits and special characters it is a little higher, but still nowhere close to an API-generated key.

To mitigate the problem mentioned above, there are two standard options to increase the security of a password based key:

1) Salt: extra random bits added to the password when deriving the key used to encrypt/decrypt the data. These random bits are stored (Base64 encoded) alongside the encrypted data in plain text so they can be used again for decryption. Each time data is encrypted, a fresh salt is added for flavour.

2) Iteration Count
The iteration count is an attempt to increase the time that an attacker will have to spend to test possible passwords. If we have an iteration count of a thousand, we need to hash the password a thousand times, which is a thousand times more computationally expensive than doing it just once. So now our attacker will have to spend 1000 times more computational resources to crack our password-based encryption.

BASE64 Encoding

Binary data is typically stored in 8-bit bytes. Standard ASCII is only 7 bits, so if we wanted to display binary as ASCII we would lose at least one bit per byte. BASE64 encoding is a way of overcoming this problem: 8-bit bytes are converted into 6-bit chunks and then into printable characters, and padding characters indicate where the data ends. The encoded characters can be displayed on the screen and converted back into binary with no difficulty. Of course, since we're moving from 8-bit chunks to 6-bit chunks, we end up with more chunks: 3 bytes become 4 characters, and vice versa.
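For example, with the JDK's built-in codec (java.util.Base64 is a modern API from Java 8; the same 3-bytes-to-4-characters idea applies to any BASE64 implementation):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    static String encode(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        String encoded = encode("abc".getBytes(StandardCharsets.UTF_8)); // 3 bytes in
        System.out.println(encoded);          // 4 characters out: YWJj
        System.out.println(new String(
                Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8)); // abc
    }
}
```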

Encryption using PBE

Decryption using PBE
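The two operations above can be sketched together in one small program, assuming the classic PBEWithMD5AndDES transform from the standard JCE provider (class and method names are my own; the salt and iteration count play exactly the roles described earlier):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;

// Minimal PBE round trip: the key is derived from the password, and the
// salt plus iteration count parameterize the derivation.
public class PbeSketch {
    static final String ALGO = "PBEWithMD5AndDES";
    static final int ITERATIONS = 1000;

    static SecretKey deriveKey(char[] password) {
        try {
            return SecretKeyFactory.getInstance(ALGO)
                    .generateSecret(new PBEKeySpec(password));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] crypt(int mode, char[] password, byte[] salt, byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance(ALGO);
            cipher.init(mode, deriveKey(password), new PBEParameterSpec(salt, ITERATIONS));
            return cipher.doFinal(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        char[] password = "my secret phrase".toCharArray();
        byte[] salt = new byte[8];           // 8-byte salt, stored alongside the data
        new SecureRandom().nextBytes(salt);  // fresh salt for every encryption

        byte[] ct = crypt(Cipher.ENCRYPT_MODE, password, salt,
                "attack at dawn".getBytes(StandardCharsets.UTF_8));
        byte[] pt = crypt(Cipher.DECRYPT_MODE, password, salt, ct);

        System.out.println(new String(pt, StandardCharsets.UTF_8)); // attack at dawn
    }
}
```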

I will give the example of PBE in another post.

I took part of the information in this post from http://javaboutique.internet.com/resources/books/JavaSec/javasec2_2.html

Thursday, July 5, 2007

How Java Cryptography Extension works - Encryption and Decryption???

The Java Cryptography Extension is a huge topic, and I am not going to write a whole book here; that would make my life miserable and bore readers chapter after chapter. In this post I am just going to discuss how encryption and decryption work using the JCE APIs, then give one working example. I will cover other JCE features in coming posts.

Symmetric encryption

I know most of you are aware of how symmetric encryption works and what its benefits and downsides are, but for anyone new to the topic it is my responsibility to give a little background. This encryption method uses a single key shared by both parties (encryptor and decryptor). It is much faster than asymmetric encryption, but exchanging the key between the two parties is the hard part. It is used wherever bulk data needs to be encrypted and decrypted; even where asymmetric encryption is involved, the actual data is encrypted symmetrically, and the asymmetric step is used only to exchange the symmetric key between the parties (please refer to my post How SSL works).

I don't want to take more of your precious time, so let's get back to the real business here.

The main cryptography classes used in this article come from the javax.crypto package.

Most of the classes in the JCE use factory methods instead of the new operator to create instances.
The Cipher class is the engine of the car, and the following are the four wheels on which you can enjoy the ride.

Wheel 1 : getInstance()

Make a call to the class's getInstance() method, with the name of the algorithm and some additional parameters like so:

Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");

The first parameter is the name of the algorithm, in this case "DESede". The second is the mode the cipher should use, "ECB", which stands for Electronic Code Book. The third parameter is the padding, specified as "PKCS5Padding". If the second and third parameters are omitted, the JCE provider's defaults are used.

Wheel 2 : init()

Once an instance of Cipher is obtained, it must be initialized with the init() method. This declares the operating mode, which should be one of ENCRYPT_MODE, DECRYPT_MODE, WRAP_MODE, or UNWRAP_MODE, and also passes the cipher a key (java.security.Key, described later). Assuming we have a key declared, initialized, and stored in the variable myKey, we could initialize a cipher for encryption with the following line of code:

cipher.init(Cipher.ENCRYPT_MODE, myKey);

Wheel 3 : update()

In order to actually encrypt or decrypt anything, we need to pass it to the cipher in the form of a byte array. If the data is in the form of anything other than a byte array, it needs to be converted. If we have a string called encryptme and we want to encrypt it with the cipher we've initialized above, we can do so with the following two lines of code:

byte[] plaintext = encryptme.getBytes("UTF8");
byte[] ciphertext = cipher.update(plaintext);

Ciphers typically buffer their output. If the input is large enough to produce one or more complete blocks of ciphertext, those bytes are returned as a byte array; if the buffer has not been filled yet, null is returned. Note that in order to get bytes from a string, we should specify the character encoding; in most cases it will be UTF-8.

Wheel 4 : doFinal()

Now we can actually get the remaining encrypted data out of the cipher: doFinal() produces a byte array containing the final block of encrypted data, including any padding.

byte[] ciphertext = cipher.doFinal();

A number of the methods we've talked about can be overloaded with different arguments, like start and end indices for the byte arrays passed in.
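To see the update()/doFinal() buffering described above in action, here is a small self-contained sketch (the class name and the two-chunk split are just for illustration) that feeds the plaintext to the cipher in two update() calls and then decrypts the result:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ChunkedCipherDemo {

    // Encrypts the message in two update() calls, then decrypts it back in one doFinal()
    public static String roundTrip(String message) throws Exception {
        KeyGenerator keyGenerator = KeyGenerator.getInstance("DESede");
        keyGenerator.init(168);
        SecretKey key = keyGenerator.generateKey();

        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        byte[] plaintext = message.getBytes(StandardCharsets.UTF_8);
        int mid = plaintext.length / 2;

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // update() returns only whole encrypted blocks; null means everything is still buffered
        byte[] part = cipher.update(Arrays.copyOfRange(plaintext, 0, mid));
        if (part != null) out.write(part);
        part = cipher.update(Arrays.copyOfRange(plaintext, mid, plaintext.length));
        if (part != null) out.write(part);
        out.write(cipher.doFinal()); // flushes the buffered bytes plus padding

        // Re-initialize the same cipher for decryption
        cipher.init(Cipher.DECRYPT_MODE, key);
        return new String(cipher.doFinal(out.toByteArray()), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("attack at dawn")); // prints attack at dawn
    }
}
```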

Since we need a key to encrypt and decrypt the data, let's discuss the java.security.Key interface a bit (NOTE: this is an interface, not a class).

We never construct Key objects directly; instead we obtain instances from classes such as javax.crypto.KeyGenerator or java.security.KeyFactory.

Generating a key with KeyGenerator involves three methods:

1) getInstance()
The example below creates a generator for DESede (TripleDES) keys.
KeyGenerator keyGenerator = KeyGenerator.getInstance("DESede");

2) init()
The code below initializes the generator for a 3DES key, which is always 168 bits.
keyGenerator.init(168);

3) generateKey()
Finally we get the key using this method.
Key myKey = keyGenerator.generateKey();

Now that we have all the necessary things to build a house, let's construct it. Just think before you start: which brick fits at what spot to make it a perfect house?

1) We need a key which we will use for encryption and decryption.
2) We need to instantiate the Cipher class using its factory method to do the actual job.

package com.kapil.util;

import java.security.Key;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;

public class JESEncryptDecrypt {

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Please enter text to encrypt");
            return;
        }
        String text = args[0];

        System.out.println("Generating a DESede (TripleDES) key...");

        // Create a TripleDES key
        KeyGenerator keyGenerator = KeyGenerator.getInstance("DESede");
        keyGenerator.init(168); // need to initialize with the key size
        Key key = keyGenerator.generateKey();

        System.out.println("Done generating the key.");

        // Create a cipher and initialize it with that key
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        byte[] plaintext = text.getBytes("UTF8");

        // Print out the bytes of the plaintext
        System.out.println("\nPlaintext: ");
        for (int i = 0; i < plaintext.length; i++) {
            System.out.print(plaintext[i] + " ");
        }

        // Perform the actual encryption
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Print out the ciphertext
        System.out.println("\n\nCiphertext: ");
        for (int i = 0; i < ciphertext.length; i++) {
            System.out.print(ciphertext[i] + " ");
        }

        // Re-initialize the cipher to decrypt mode
        cipher.init(Cipher.DECRYPT_MODE, key);

        // Perform the decryption
        byte[] decryptedText = cipher.doFinal(ciphertext);

        String output = new String(decryptedText, "UTF8");

        System.out.println("\n\nDecrypted text: " + output);
    }
}

How Caching works???

In today's web world, many concurrent users access web applications. Most web applications access a database (relational or hierarchical) in some way or another to authenticate users and validate their rights within the application. If a web application hit the database every time a user accessed the site, it would drive users to a competitor, because competitors are smart and implement caching. :-)

Web applications use caching to store session information and authorization rights for fast access. If a web application does not manage a cache, it has to query the database for authorization information every time the user accesses a link. This task is both time and resource consuming.

When an object is retrieved for the first time from the database, instead of discarding the information, it is stored in a buffer called a cache. There are a lot of complications in storing the retrieved information in the cache:

1) Since caching is meant for fast retrieval, if we keep adding newly retrieved content from the database, the cache will grow to an unmanageable size. Smart implementations remove unnecessary content from the cache based on different eviction algorithms. Some of them are mentioned below:

a) Least Recently Used
b) Least Frequently Used

2) Implementing the cache with data structures and algorithms that make fast retrieval/search possible.

3) Consider a scenario in which a user initially has rights to access 10 links on a site, and those rights are cached. Later, those access rights are modified (more rights added or some removed) from the admin console. If the cache is not updated in time, the user will not be able to access what he or she has the right to access. To overcome this problem, the cache should be updated at the same time the rights are modified.

4) Consider another scenario in which there are two servers behind a load balancer and cached objects are local to each server (i.e. the cache is not shared between the servers). If one of the servers goes for a toss, what happens to its cached objects? To handle this, the cache manager needs to send notifications of newly added/removed cached objects to the other servers in the cluster.

5) Implementing a maximum time for which a cached object may live in the cache. After that maximum time is reached, the object should be removed from the cache and re-cached on the next request from the user.

6) Caching parameters should be configurable so administrators can change them based on requirements.
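As a minimal sketch of the Least Recently Used eviction mentioned in point 1a, Java's LinkedHashMap can be turned into an LRU cache by enabling access order and overriding removeEldestEntry(); the class name and the example keys are just illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true makes get() move an entry to the "most recently used" end
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once the cache exceeds capacity
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("user1", "rights-for-user1");
        cache.put("user2", "rights-for-user2");
        cache.get("user1");                     // touch user1, so user2 is now least recently used
        cache.put("user3", "rights-for-user3"); // exceeds capacity, evicting user2
        System.out.println(cache.keySet());     // prints [user1, user3]
    }
}
```

Frameworks like the ones listed below add the harder parts on top of this idea: time-to-live limits, cluster-wide invalidation notifications, and configurable parameters.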

Caching frameworks

Several object-caching frameworks (both open source and commercial implementations) provide distributed caching in servlet containers and application servers. A list of some of the currently available frameworks follows:

Open Source:
Java Caching System (JCS)
Java Object Cache (JOCache)
Java Caching Service, an open source implementation of the JCache API (SourceForge.net)
SwarmCache
IronEye Cache

SpiritCache (from SpiritSoft)
Coherence (Tangosol)
ObjectCache (ObjectStore)
Object Caching Service for Java (Oracle)


How reverse proxy works ???

You must have heard the term reverse proxy a couple of times and wondered what the heck it is. I am going to give you some idea about it in this post, but before getting to reverse proxies I would like to explain how a forward proxy works.

Forward Proxy: A forward proxy acts as a gateway for a client's browser, sending HTTP requests on the client's behalf to the Internet. The proxy protects your inside network by hiding the actual client's IP address and using its own instead. When an outside HTTP server receives the request, it sees the requestor's address as originating from the proxy server, not from the actual client. In organizations, you configure this in your browser settings, and most of what follows happens behind the scenes.

Reverse Proxy: A reverse proxy handles requests sent to the organization's web servers from outside; it sits in front of the web servers. It acts as a gateway to an HTTP server or HTTP server farm by serving as the final IP address for requests from the outside. The firewall works tightly with the reverse proxy to help ensure that only the reverse proxy can access the HTTP servers hidden behind it. From the outside client's point of view, the reverse proxy is the actual HTTP server.

Benefits of Reverse Proxy

  1. Clients now have a single point of access to your HTTP servers.

  2. You have a single point of control over who can access and to which HTTP servers you allow access.

  3. Easy replacement of backend servers or host name changes.

  4. Ability to assimilate various applications running on different Operating Systems behind a single facade.

Downside of Reverse Proxy

  1. If the reverse proxy fails and no failover is supported, access to everything behind it goes for a toss.

  2. If an attacker does compromise the reverse proxy, the attacker may gain more insight into your HTTP server architecture; or, if the HTTP servers it is hiding are inside the firewall, the attacker might be able to compromise your internal network.

  3. A lot of translations have to occur for the reverse proxy and the firewall to do their work, so requests may be fulfilled a little more slowly.

Many web server plug-ins are available that support reverse proxy functionality. For example, the Apache module mod_proxy supports both forward and reverse proxy setups depending on the configuration.
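As a sketch of the mod_proxy setup mentioned above, a minimal reverse-proxy configuration for Apache httpd might look like the following (the backend host name internal-app.example.com is hypothetical):

```apache
# Load the proxy modules (paths may differ per distribution)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# Off = act as a reverse proxy only; On would enable open forward proxying
ProxyRequests Off

# Forward /app requests to the hidden backend server
ProxyPass        /app http://internal-app.example.com/app
# Rewrite redirect headers from the backend so clients keep talking to the proxy
ProxyPassReverse /app http://internal-app.example.com/app
```

From the client's point of view, the site lives entirely on the proxy host; the backend address never appears.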

Sunday, July 1, 2007

Static Vs Dynamic LDAP Groups

LDAP directory servers contain information about people: users, employees, customers, partners, and others. Many times, it makes sense to associate entries together in groups. A group is basically a collection of entries. Entries can be statically assigned to a group, or they can share a set of common attributes on which a dynamic group can be formed.

1) Static Group

A static group defines each member individually, using a structural object class such as groupOfNames or groupOfUniqueNames depending on the directory server implementation. These object classes require the attribute member (or uniqueMember in the case of groupOfUniqueNames). Static groups work well when the number of users in a group is not large, because the group contains an entry for each user who belongs to it. The more people assigned to the group, the more complicated the task of managing it.

2) Dynamic Group

Dynamic groups allow you to use an LDAP URL to define a set of rules that match only the group's members. For dynamic groups, the members share a common attribute or set of attributes that are matched by the memberURL filter. These are a good fit when the number of users in the group is very large. A dynamic group is a much better choice than a static group in that case, because the set of members is automatically adjusted as new users are added and existing users are removed.

Example :

dn: cn=Austin Users,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfURLs
cn: Austin Users
memberURL: ldap:///ou=People,dc=example,dc=com??sub?(|(l=Austin)(st=Texas))

In the above example, all users under ou=People whose location (the l attribute) is Austin or whose state (the st attribute) is Texas belong to Austin Users.


Roles are similar to groups, but they work differently. Groups are effectively listings of members. In order to find out, for example, which groups "David" belongs to, you would need to look at every group and see if it contains "David". Roles, on the other hand, are associations that are stored in the users' entries themselves.

As a member of a role, you have the authority to do what is needed for the role in order to accomplish a job. Unlike a group, a role comes with an implicit set of permissions. There is not a built-in assumption about what permissions are gained (or lost) by being a member of a group.


Hub and Switch and Router

I was doing a udemy course to learn more about the networking concepts and wanted to clarify the confusion between Hub, Switch and Router. ...