Friday, June 29, 2007

How to get plain text password in Active Directory???


I have seen many products that synchronize passwords or send password-sync events to IDM products when a user changes a password in Active Directory. For security reasons, Windows does not allow anyone to read the plain text password once it is stored in the directory, but Microsoft has provided a way for cases where we need the plain text password for the reasons above, and also for enforcing a specific password policy which cannot be achieved with the out-of-the-box configuration.

There can be a chain of password filter DLLs installed; they will be called one after the other in the sequence defined in the registry (I will discuss this configuration in a bit).



Password Filters

Password filters provide a way for you to implement password policy and change notification.

When a password change request is made, the Local Security Authority (LSA) calls the password filters registered on the system. Each password filter is called twice: first to validate the new password and then, after all filters have validated the new password, to notify the filters that the change has been made. The following illustration shows this process.





Important Functions


InitializeChangeNotify
Indicates that a password filter DLL is initialized.
PasswordChangeNotify
Indicates that a password has been changed.
PasswordFilter
Validates a new password based on password policy.


To install and register a password filter DLL
Copy the DLL to the Windows installation directory on the domain controller or local computer.
To register the password filter, update the following system registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
If the Notification Packages subkey exists, add the name of your DLL to the existing value data. Do not overwrite the existing values, and do not include the .dll extension.
If the Notification Packages subkey does not exist, add it, and then specify the name of the DLL for the value data. Do not include the .dll extension.
The Notification Packages value can hold multiple packages.
Find the password complexity setting.
In Control Panel, click Performance and Maintenance, click Administrative Tools, double-click Local Security Policy, double-click Account Policies, and then double-click Password Policy.
To enforce both the default Windows password filter and the custom password filter, ensure that the Passwords must meet complexity requirements policy setting is enabled. Otherwise, disable the Passwords must meet complexity requirements policy setting.
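For reference, here is a sketch of what the finished registry value looks like. The DLL name "mypwdfilter" is hypothetical; "scecli" is the entry Windows typically ships with for the default complexity filter:

```
Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
Value: Notification Packages   (type REG_MULTI_SZ)
Data:  scecli
       mypwdfilter     <-- your filter DLL, listed without the .dll extension
```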


Courtesy: some of the content in this article is taken directly from the Microsoft site.



How to configure a service in Linux???


After working in Windows environments for years, we want the same kind of flexibility in Linux environments, where we don't have to remember where our scripts reside or how to start and stop the applications we want to use. Linux has a very good solution for this problem. I am going to discuss here how to configure a service in Linux and how to use it.

1) The application you want to configure as a service under Linux should have a script with start and stop options in it, for example:

case "$1" in
start) --> Start the application
stop) --> Stop the application

$1 here is the argument passed to the service command, for example "service tomcat start".

Here tomcat is the name of the script which has the start and stop options. This script should be placed under the /etc/init.d directory.

2) The script should have the following three things:

1) Execute permission
2) A description comment
3) A chkconfig comment

I had a script that was missing the description comment, which I felt was not mandatory, but when I tried to add that script as a service it failed.
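To tie steps 1) and 2) together, here is a minimal sketch of such an init script for a hypothetical application called myapp; the paths in the comments are placeholders, and only the echo statements are real:

```shell
#!/bin/sh
# chkconfig: 345 85 15
# description: Starts and stops the myapp server.
# The chkconfig line means: default runlevels 345, start priority 85, stop priority 15.

start() {
    echo "Starting myapp"
    # e.g. /opt/myapp/bin/run.sh &   (hypothetical path)
}

stop() {
    echo "Stopping myapp"
    # e.g. kill "$(cat /var/run/myapp.pid)"   (hypothetical pidfile)
}

case "$1" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```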

3) To add a script as a service, use the chkconfig command as follows:
chkconfig --add <scriptname>


4) To list all the services:

chkconfig --list

5) To get the status of all the services:

service --status-all

6) To set the run levels of the service:

chkconfig --level 345 <scriptname> on
chkconfig --level 345 <scriptname> off




How to see LDAP protocol working over the network???


Many network protocols like HTTP and SMTP are text-based, which means that
it is relatively simple to decode that information if it is intercepted over the wire.
LDAP, however, is a binary protocol that uses the ASN.1 basic encoding rules
specification to encode all communication. While some components of LDAP
communication (e.g., distinguished names) may be decipherable, it is
significantly more difficult to interpret other data elements.
To address this problem, the LDAPDecoder utility provides a means of
interpreting LDAP communication and displaying it in a human-readable form.
This can be very useful for debugging problems with the interaction between
LDAP clients and a directory server, or to simply gain a better understanding of
the structure of LDAP traffic.

LDAPDecoder can be downloaded from http://www.slamd.com/download.shtml.

Unpack LDAPDecoder.jar and the how-to guide from the compressed file into a folder.

To run LDAPDecoder, use the following command:

java -jar LDAPDecoder.jar -L <listen port> -f <log file>


Once it is listening, you can run ldapsearch/ldapmodify through it and check the log file to see how the LDAP server responds to the request.

Here is the request/response sequence for an ldapsearch:

New client connection from 127.0.0.1:1298
Read data from the client
Decoded Data from Client:
LDAP Bind Request
Message ID: 1
LDAP Bind Request Protocol Op
LDAP Version: 3
Bind DN: cn=admin
Authentication Data:
Authentication Type: Simple
Bind Password: password

Read data from the server
Decoded Data from Server:
LDAP Bind Response
Message ID: 1
LDAP Bind Response Protocol Op
Result Code: 0 (Success)

Read data from the client
Decoded Data from Client:
LDAP Search Request
Message ID: 2
LDAP Search Request Protocol Op
Base DN: dc=am,dc=sony,dc=com
Scope: 2 (wholeSubtree)
Deref Aliases: 0 (neverDerefAliases)
Size Limit: 0
Time Limit: 0
Types Only: false
Filter: (uid=abcd)
Attributes:

Read data from the server
Decoded Data from Server:

LDAP Search Result Entry
Message ID: 2
LDAP Search Result Entry Protocol Op
dn: uid=abcd,ou=users,l=america,dc=am,dc=sony,dc=com
mail: Jack.Bauer@sun.com
cn: Bauer Jack
sn: Jack
givenName: Bauer
uid: abcd
objectClass: top
objectClass: person
objectClass: organizationalperson
objectClass: inetorgperson
objectClass: inetuser
objectClass: sonyperson
objectClass: americasonyperson
userPassword: {SSHA}7h3HPwNNIYAecfrYbigXsQinNqW2N/gqGxECLw==




Thursday, June 28, 2007

List of SSO Products


Here are some of the widely used SSO products in the industry.





Catalyst 2007


Yesterday I attended Catalyst 2007. I met a couple of different people looking for solutions in the IDM space. Here are a few of them which I would like to post on this blog.

1) Intelligent policy engine for IDM: One organization (I am not sharing the name) was looking for a policy engine that can interact with an IDM product to provision user accounts on different systems (100+) based on user attributes. Administrators will manage this policy engine to define which user attributes decide access to which applications. The policy engine is then used by the IDM product to provision accounts on the different systems. Current IDM products do have rules in which a provisioning policy can be defined, but they are very complicated to manage, and administrators who are not comfortable with the vendor-specific language cannot manage the business policies. I will talk about this design in detail in a separate topic.

2) Desktop SSO: I met a product manager selling the idea of desktop SSO. The idea is to pass the user's credentials to the end application (OS [Linux/Unix], web application, legacy application, etc.) when the user is prompted on application access. The product stores the credentials on the desktop in encrypted form for security reasons.

3) One organization has built a product that can be integrated with the Oracle IDM suite for provisioning physical access control.




Tuesday, June 26, 2007

How SSL Handshake works???


The purpose of the SSL handshake is a multifaceted one. It involves server (and optionally client) authentication, determining which cryptographic algorithms are to be used, and generating a secret key with which all future SSL data exchanges will be encrypted. The steps by which this is accomplished are as follows:

Client Hello
The main purpose of the client hello is to inform the server of what cryptographic algorithms the client can support and to ask for verification of the server's identity.

The client sends to the server three things:
· Set of supported cryptographic protocols.
· Compression algorithms.
· Random number.

The purpose of the random numbers is that if the server does not have a Digital ID, a secure connection can still be made using a different set of algorithms (like Diffie-Hellman), although it is then impossible to verify the server's identity.

Within the set of cryptographic protocols there is the key exchange protocol (how we are going to exchange the information), the secret key algorithms (what encryption methods we can use), and a one-way hash algorithm. No secret information is passed at this point, just a list of choices.


Server Hello
The server responds by sending the client its Digital ID (which includes its public key), the set of chosen cryptographic and compression algorithms, and another random number. The decision as to which cryptographic algorithms are to be used is based on whichever are the strongest that both the client and server support. In some situations, the server may also ask the client to identify itself as well (by requesting a Digital ID).

Client Approval
This step involves the client's browser checking the validity of the Digital ID sent by the server. It first verifies the Digital ID's signature using the public key of the certificate's issuer and determines that the certificate comes from a trusted certificate authority. After that, a series of validity checks occurs. This includes checking the validity dates and making sure that the web page URL matches the one listed in the certificate.

Once the server's identity has been verified, the client randomly generates the secret key and encrypts it using the server's public key (the server sent this to the client in the previous step) and the previously determined cryptographic and compression algorithms. The client then sends the encrypted secret key to the server.

Verification
At this point both parties know the secret key. The client knows the secret key because the client generated it, and the only other party that knows it is the server, because a message encrypted with the server's public key can only be decrypted with the server's private key. A final check is done to ensure that the previous transfers have not been tampered with: both parties send a copy of all the previous transactions encrypted with the secret key. If both parties confirm the validity of the transactions, the handshake is completed. Otherwise the handshake process is re-initiated.

Both parties are now ready to communicate securely using the agreed-upon secret key and cryptographic/compression algorithms. The SSL handshake is done only once, and the secret key is used for only one session.
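You can see the raw material for this negotiation on any machine with OpenSSL installed: each cipher suite that a client can offer in its hello names exactly the components described above.

```shell
# List some locally supported AES-based cipher suites with their components:
# Kx = key exchange, Au = authentication, Enc = bulk cipher, Mac = one-way hash.
openssl ciphers -v 'AES' | head -5
```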

Graphical Representation of Steps









Is probing login.jsp from the load balancer a good option???


Load balancing gives applications failover. A load balancer needs to know the status of each server in order to redirect clients. Many times people think of pointing the load balancer probe at the login page. This seems simple, but think about what a login page has to do before pointing the load balancer at it.

Before describing the responsibilities of the login page, I would like to give an example of a load-balanced application.

There are two servers behind the load balancer. If a user is logged into one server (Server 1) and goes directly to the other server (Server 2), Server 2 should not redirect the user to the login page again, as the backend is the same for both servers.

Now let's see what the login page's responsibilities are:
1) Check whether the user is authenticated or not.
2) If the user is authenticated, redirect the user to the requested page.
3) Check what access rights the user has to the application.
4) Cache the user's access rights if the user is authenticated.

These are just some examples; to perform each operation, the login page has to call lots of vendor-specific APIs, which are costly and sometimes have memory leaks.

Now think about a scenario where the load balancer is configured to probe each server every 5-10 seconds.

Think again ????

I hope you have already guessed that the login page is not the right approach at all.

Then what is the solution???

It's always better to build a very lightweight application or a simple page, hosted on the server, which tells whether the server is up or not.

But that will only tell whether the server is up; what about the repository server which the server contacts???

Your question is absolutely correct. But again, instead of configuring the login page, which is costly as discussed above, it's better to build a custom page which just checks whether the backend repository is up or not.

I hope this helps some of my friends reading this article.




Tips for Developing efficient LDAP Applications


I took this topic from a Sun blog but added some example code to show what each point means, so part of the credit goes to them.

* Make sure to use LDAPv3 rather than LDAPv2. Some APIs still default to LDAPv2, but LDAPv2 doesn't support features like controls, extended operations, referrals, SASL authentication, and multiple binds on the same connection.

* Use at least minimal caching to avoid repeating the same queries. If you include a list of attributes to return, then make sure that you include all the attributes you may need rather than performing different queries to retrieve the same entry with different attribute lists.

For example :

use this:

String RETURN_ATTRIBUTES[] = { "mail", "givenname", "sn" };
Attributes attrs = ctx.getAttributes(DN, RETURN_ATTRIBUTES);

instead of this:

String RETURN_ATTRIBUTES1[] = { "mail" };
Attributes attrs1 = ctx.getAttributes(DN, RETURN_ATTRIBUTES1);

String RETURN_ATTRIBUTES2[] = { "mail", "givenname" };
Attributes attrs2 = ctx.getAttributes(DN, RETURN_ATTRIBUTES2);

String RETURN_ATTRIBUTES3[] = { "givenname", "sn" };
Attributes attrs3 = ctx.getAttributes(DN, RETURN_ATTRIBUTES3);


* Design your application to allow for loose consistency in replication and the possibility that reads and writes may happen on different systems without the application's knowledge. Avoid read-after-write behavior because it can have inconsistent results.

What I understood: in a load-balanced environment, when the application uses connection pooling, it may get connections to different servers. Sometimes, when a write operation is performed on a consumer, the consumer redirects the request to the master. The master may take some time to replicate the change to all consumers, so if you try to read right after a write there may be inconsistencies.

* Don't treat the Directory Server like a relational database. Avoid splitting data into separate pieces so that you need to retrieve multiple entries to get all the information about a given entity.

* If you generate search filters, then do so intelligently. If you have compound filters, then use a form like "(&(a=b)(c=d)(e=f))" rather than "(&(&(&(a=b))(c=d))(e=f))" to avoid unnecessary nesting.

Unnecessary nesting makes the filter more complicated to understand and the operation slower: in the case above, the LDAP server has to process three AND operators in the second filter instead of one in the first. The Softerra browser has a very good interface for building queries.

* Unbind connections when they're no longer needed. It's generally best to re-use connections as much as possible, but whenever you're done with a connection make sure it gets closed.

* Don't litter your code with hard-coded attribute/objectclass names, base DNs, server addresses/ports, usernames/passwords, etc. If you need to change something later, it can be hard to make sure that everything gets updated properly. You should centralize all such values in a constants class or a properties file so that they are simple to change if necessary.

* Where possible, maintain a set of persistent connections to the server (i.e., connection pools) rather than connecting and disconnecting for each operation. This will be much more efficient, especially when using SSL. In order to avoid leaking connections and duplicating large amounts of code, it may be a good idea to code the various types of operations into the connection pool itself so that those operations will check out a connection, perform the operation and any necessary error handling, and make sure the connection is put back into the pool.

I promise I will add a topic to explain how to write code for connection pooling in LDAP.

* Design your application to be able to handle the different kinds of failures that may arise: server down, network outage, DS backlogged or unresponsive, DS returning unexpected responses (e.g., unavailable or busy). Don't assume that a lost connection means the server is down -- it could be that the connection was closed due to the idle timeout or some other constraint.




Structural vs Auxiliary Object Class??

What Are Objectclasses?
Objectclasses are prototypes for entries that will actually exist in the directory server. The objectclass definition (written in ASN.1 syntax) specifies which attributes may or must be used by LDAP entries declared as instances of a particular objectclass.

When a schema needs to be designed for a custom implementation, people are often confused about when to use structural and when to use auxiliary object classes.

Thumb rule: every LDAP entry must use a STRUCTURAL objectclass, and it can have only one STRUCTURAL objectclass. It can have any number of AUXILIARY object classes. When an auxiliary object class is attached to an entry, the attributes defined by that auxiliary object class (MAY/MUST) are attached as well.

For example, in an organization there are multiple applications, each needing specific attributes depending on whether the application is assigned to a user. Each application can be represented using an auxiliary object class. This way, whichever application is attached to the user, the attributes specific to that application are assigned to the user.


I will give an example of defining a structural vs an auxiliary object class, but let's first read the definition of each type of object class from the Sun documentation:

Abstract: defined only as a superclass or template for other (structural) object classes. An abstract object class is a way of collecting a set of attributes that will be common to a set of structural object classes, so that these classes may be derived as subclasses of the abstract class rather than being defined from scratch. An entry may not belong to an abstract object class.

--> Think of these as abstract classes in Java. They are just helper classes, so structural object classes can simply extend them for common attributes. For example, top is an abstract object class.

Structural: indicates the attributes that the entry may have and where each entry may occur in the DIT. This object class represents the corresponding real world object. Entries must belong to a structural object class, so most object classes are structural object classes.


Auxiliary: indicates the attributes that the entry may have. An auxiliary object class does not represent a real world object, but represents additional attributes that can be associated with a structural object class to supplement its specification. Each entry may belong to only a single structural object class, but may belong to zero or more auxiliary object classes.


As promised here are example of how you can define object classes.

objectClasses: ( 1.3.6.1.4.1.20502.1.2.12 NAME 'ABCAPSMUser' SUP top AUXILIARY MAY absAPSMId X-ORIGIN 'user defined' )

objectClasses: ( 1.3.6.1.4.1.20502.1.2.8 NAME 'ABCuser' SUP inetorgperson STRUCTURAL MUST ABCuserstatus MAY ( gender $ absuserpassword $ inetUserStatus $ personalTitle $ mailAlternateAddress $ mailHost $ mailRoutingAddress ) X-ORIGIN 'user defined' )
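Assuming the two definitions above are loaded into the server, a hypothetical entry combining them might look like this in LDIF (all values are made up). Note the single structural chain ending in ABCuser, plus the auxiliary ABCAPSMUser bolted on with its extra attribute:

```
dn: uid=jdoe,ou=users,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: ABCuser
objectClass: ABCAPSMUser
uid: jdoe
cn: John Doe
sn: Doe
ABCuserstatus: active
absAPSMId: 12345
```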

How to rename LDAP DN ?


Often I hear from my friends that they have a requirement to rename a user id in LDAP/Active Directory. They tried to modify the DN attribute directly, but that does not help, as the DN is an operational attribute which cannot be modified directly.

The correct approach is to call the rename API from JNDI, and that takes care of the job. I am pasting sample code (changing the cn of matching entries) which I feel will help someone who is looking for similar functionality.

public static boolean changeId(DirContext ctx, String p_oldID, String p_newID)
{
    String RETURN_ATTRIBUTES[] = { "uid", "objectclass", "modifytimestamp" };
    String DN = null;
    String LDAPuid = null;
    int count = 0;

    // New DN for the entry; the container (OU=Users,OU=Test) is specific to this tree
    String newDN = "cn=" + p_newID + ",OU=Users,OU=Test";

    try
    {
        String SEARCH_FILTER = "(cn=" + p_oldID + ")";

        SearchControls constraints = new SearchControls();
        // Set search scope to subtree
        constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration results = ctx.search("", SEARCH_FILTER, constraints);

        while (results != null && results.hasMore())
        {
            SearchResult sr = (SearchResult) results.next();
            DN = sr.getName();
            System.out.println("DN is " + DN);
            count++;

            Attributes attrs = ctx.getAttributes(DN, RETURN_ATTRIBUTES);
            Attribute attr = attrs.get("uid");
            LDAPuid = (String) attr.get();
            System.out.println("UID is " + LDAPuid);

            // The rename API does the real work of changing the DN
            ctx.rename(DN, newDN);
        }
        System.out.println("Total no of records: " + count);
    }
    catch (Exception e)
    {
        System.out.println("In the exception block");
        e.printStackTrace();
        return false;
    }
    return true;
}




How SPML works


What is SPML?

The Service Provisioning Markup language (SPML) is the open standard protocol for the integration and interoperation of service provisioning requests. SPML version 1.0 is a draft OASIS standard due for ratification in Summer 2003.

What does 'service provisioning' mean?

Service provisioning refers to the "preparation beforehand" of IT systems' materials or supplies required to carry out a specific activity. It goes beyond the initial "contingency" of providing resources, to encompass the entire lifecycle management of these resources. This includes the provisioning of digital services such as user accounts and access privileges on systems, networks and applications, as well as the provisioning of non-digital or "physical" resources such as cell phones and credit cards.


How it is used?

Products like Sun Identity Manager provide an SPML interface so end applications/systems can provision/de-provision user accounts on the system. Designers should expose web service calls which internally use SPML calls to interact with the system. This way different systems can integrate with the centralized system for the necessary action. Extra care should be taken before exposing the web service calls to the whole world, as the web service exposes critical functions.

Here is a little program which I have written for creating a user account using SPML on Sun Identity Manager:

import org.openspml.client.LighthouseClient;
import org.openspml.message.ExtendedRequest;
import org.openspml.message.SpmlResponse;

public class CallingCustomWorkflow {

    public static void main(String[] args) {
        try
        {
            // Connect to the Identity Manager RPC router
            LighthouseClient client = new LighthouseClient();
            client.setTrace(true);
            client.setUrl("http://localhost/idm6/servlet/rpcrouter2");
            client.setUser("configurator");
            client.setPassword("password");

            // Build an extended request that launches a custom workflow
            ExtendedRequest req = new ExtendedRequest();
            req.setOperationIdentifier("launchProcess");
            req.setAttribute("accountId", "abcdefg");
            req.setAttribute("firstname", "abcd");
            req.setAttribute("lastname", "abcd");
            req.setAttribute("password", "gdswer");
            req.setAttribute("resources", "LDAP");
            req.setAttribute("process", "SPMLWorkflow");

            SpmlResponse res = client.request(req);
            System.out.println("Response received: " + res);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}

How HTTP Keep Alive works?


HTTP Keep Alive
HTTP Keep-Alive seems to be massively misunderstood. Here's a short description of how it works, under both 1.0 and 1.1, with some added information about how Java handles it.

HTTP operates on what is called a request-response paradigm. This means that a _client_ generates a request for information, and passes it to the server, which answers it. In the original implementation of HTTP, each request created a new socket connection to the server, sent the request, then read from that connection to get the response.

This approach had one big advantage - it was simple. Simple to describe, simple to understand, and simple to code. It also had one big disadvantage - it was slow. So, keep-alive connections were invented for HTTP.

HTTP/1.0
Under HTTP 1.0, there is no official specification for how keepalive operates. It was, in essence, tacked on to an existing protocol. If the browser supports keep-alive, it adds an additional header to the request:

Connection: Keep-Alive

Then, when the server receives this request and generates a response, it also adds a header to the response:

Connection: Keep-Alive

Following this, the connection is NOT dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.
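A sketch of what such an HTTP/1.0 exchange looks like on the wire (the host and sizes are made up). Note that the server must send a Content-Length, because on a persistent connection the client can no longer rely on the connection closing to mark the end of the body:

```
GET /index.html HTTP/1.0
Host: www.example.com
Connection: Keep-Alive

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 1024
Connection: Keep-Alive

...1024 bytes of body, then the connection stays open for the next request...
```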

HTTP/1.1
Under HTTP 1.1, the official keepalive method is different. All connections are kept alive, unless stated otherwise with the following header:

Connection: close

The Connection: Keep-Alive header no longer has any meaning because of this.

Additionally, an optional Keep-Alive: header is described, but is so underspecified as to be meaningless. Avoid it.


Courtesy: http://www.io.com/~maus/HttpKeepAlive.html

Monday, June 25, 2007

Commonly used openssl commands


Here I am giving some of the commonly used openssl commands.


• Generate A Certificate Signing Request
openssl req -new -newkey rsa:1024 -keyout hostkey.pem -nodes -out hostcsr.pem
• Create A Self-Signed Certificate From A Certificate Signing Request
openssl req -x509 -days 365 -in hostcsr.pem -key hostkey.pem -out hostcert.pem
• Generate A Self-Signed Certificate From Scratch
openssl req -x509 -days 365 -newkey rsa:1024 -keyout hostkey.pem -nodes -out hostcert.pem
• Generating a certificate using the ca certificate generated above
openssl x509 -req -in sonycsr\ldapssllocal.pem -CA sonycerts\ca.pem -CAkey sonykeys\cakey.pem -CAcreateserial -out sonycerts\ldapssl.pem -days 1024
• View The Contents Of A Certificate Signing Request
openssl req -text -noout -in hostcsr.pem
• View The Contents Of A Certificate
openssl x509 -text -noout -in hostcert.pem
• View The Signer Of A Certificate
openssl x509 -in cert.pem -noout -issuer -issuer_hash
• Verify A Certificate Matches A Private Key (the two moduli must be identical)
openssl rsa -in key.pem -noout -modulus
openssl x509 -in cert.pem -noout -modulus
• Find The Hash Value Of A Certificate
openssl x509 -noout -hash -in cert.pem
• Create A Private Key
openssl genrsa -des3 -out key.pem 1024
• Encrypt A Private Key
openssl rsa -des3 -in hostkeyNOPASSWORD.pem -out hostkeySECURE.pem
• Decrypt A Private Key
openssl rsa -in hostkeySECURE.pem -out hostkeyNOPASSWORD.pem
• Convert PEM Format Certificate To PKCS12 Format Certificate
openssl pkcs12 -export -in cert.pem -inkey key.pem -out cred.p12
• Convert PKCS12 Format Certificate To PEM Format Certificate
openssl pkcs12 -in cred.p12 -out certkey.pem -nodes -clcerts
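Putting a few of these together, here is a runnable sketch that generates a throwaway key and self-signed certificate (the file names and subject are arbitrary) and then confirms the pair matches by comparing moduli:

```shell
# Generate a 2048-bit key and a self-signed cert in one shot (non-interactive).
openssl req -x509 -days 1 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -subj "/CN=demo.example.com" 2>/dev/null

# Hash the modulus of each; identical digests mean key and cert belong together.
openssl rsa  -in /tmp/demo-key.pem  -noout -modulus | openssl md5
openssl x509 -in /tmp/demo-cert.pem -noout -modulus | openssl md5
```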




Advanced unix/linux commands


1) Find a file in the whole computer --> find / -type f -name <filename> -print
2) Find a file pattern --> find . -type f -name "*<pattern>*" -print
3) Delete all cores in the system --> find / -type f -name core -exec /bin/rm -f {} \;
4) Find all files with a word in them --> find . -type f -exec grep -l <word> {} \;
5) Find files modified longer than a month ago --> find . -type f -ctime +30 -print
6) Use found files more than once with xargs --> find . -name "*.c" -print | xargs -i cp {} {}.bak
7) Don't search in nfs mounted filesystems --> find . -local ...
8) Look for files larger than 1 megabyte --> find /path -size 1000000c -print
9) Run find but discard the "permission denied" errors --> find ... 2>/dev/null (in sh/bash/ksh only)
10) How to find the disk usage --> du -S | sort -n > chksize.txt
11) How to get the disk space usage --> df -h (this will show space in readable format)
12) Getting folder size in readable format --> du -hs /path/to/folder
13) Sorting the files in Linux by file size --> ls -Shl | more




Active Directory Accountlock Vs Disabled


When an account is locked in AD, the "lockoutTime" attribute is set to the time when the account was locked.
If the account was never locked, the user record will not have the "lockoutTime" attribute (after an administrator unlocks the account, it is set to 0).

If the account is disabled, userAccountControl will be set to 514 or 546.
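The reason for those two particular numbers: userAccountControl is a bit mask, and 514 = 512 (NORMAL_ACCOUNT) + 2 (ACCOUNTDISABLE), while 546 additionally has 32 (PASSWD_NOTREQD) set. A quick shell check of the disable bit:

```shell
# Test bit 0x2 (ACCOUNTDISABLE) in some sample userAccountControl values.
for uac in 512 514 546; do
    if [ $(( uac & 2 )) -ne 0 ]; then
        echo "$uac: disabled"
    else
        echo "$uac: enabled"
    fi
done
# Prints: 512: enabled / 514: disabled / 546: disabled
```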

Howto install MySQL as a service using custom INI file


mysqld --install <servicename> --defaults-file=<path to your .ini file>


The --defaults-file option is required to fix the 1067 error which comes up most of the time.

How to Make strong password in Unix


Wondering how to make a safe password? mkpasswd is the solution.
It is normally standard in all Linux/Unix distributions.

Example: this will produce a unique 8-letter password with a minimum of 2 digits and 3 uppercase letters:

$ mkpasswd -l 8 -d 2 -C 3

Hub and Switch and Router

I was doing a Udemy course to learn more about networking concepts and wanted to clarify the confusion between a hub, a switch and a router. ...