Friday, July 20, 2007

Who rebooted the Linux system

The availability of details depends on your syslog settings, but in any case you can do the following:

1. Get the boot time. You can get it in a couple of ways: type the "uptime" command and count back from how long the system has been up, go to
/var/log and look at the boot.log file, or check the "messages" file in the same directory for the "syslog started" time stamp.

2. Type the "last" command and see which users were logged in at the time the system was rebooted.

3. Check those users' shell history files (~username/.bash_history) for su or sudo commands.
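
For steps 1 and 2, a quick sequence of commands (output formats vary a bit by distribution):

who -b         # time of the last system boot
last reboot    # reboot records from /var/log/wtmp
last | less    # login sessions around that time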

All of the above makes sense ONLY if access to the root account is controlled and nobody but the root user knows the root password. If you share the root password, it is almost impossible to find out who rebooted the system. Your only chance is if syslog was set to record network events: check the messages and security logs in /var/log for connections that were alive around the time of the reboot. If your DHCP leases are long, static IPs were used, or the log entries resolve to DNS names, you can build a list of suspects and then proceed to step 3.

Keep in mind that if someone INTENTIONALLY rebooted the system, had complete root access, and possesses some skills, it is not only nearly impossible to track them down; he/she may also have forged the logs in any desirable way.

DO NOT SHARE ROOT ACCESS! USE "SUDO" TO PROTECT ROOT ACCOUNT!



Courtesy: http://www.unix.com/unix-for-dummies-question-and-answers/27272-how-to-identify-who-rebooted-the-linux-server.html

Tuesday, July 17, 2007

How to get the Oracle version


Sometimes we come across a situation where we have to find out which version of Oracle is running. Here is a simple query you can run to find the exact Oracle version:



SELECT * FROM V$VERSION



I had a strange scenario where I had to connect to Oracle instances of different versions from a single web app. The Oracle page below helps in finding a driver that can match the requirements of those versions:



http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.htm#02_02
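
If you are connecting from Java, the JDBC driver itself can also report the database and driver versions; here is a minimal sketch (the connection URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class OracleVersionCheck {
    public static void main(String[] args) throws Exception {
        // Older (pre-JDBC 4) drivers may first need: Class.forName("oracle.jdbc.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");

        DatabaseMetaData meta = conn.getMetaData();
        System.out.println("Database: " + meta.getDatabaseProductVersion());
        System.out.println("Driver:   " + meta.getDriverVersion());

        conn.close();
    }
}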


Thursday, July 12, 2007

Can the chmod command be dangerous???

We all think that chmod is there just to help us and can never do any harm to the system. But hey, wait a sec... too much permission can be dangerous too. I learned this today when one of my team members ran the command below on one of our Linux servers.

chmod -R 777 *

As soon as this command was executed, the Linux system suspected something was wrong and stopped serving several services. I was not even able to ssh to the box. I had to reduce the permissions on the affected files (sshd, for example, refuses to use key and configuration files that are world-writable) to get ssh on the system working again.

Wednesday, July 11, 2007

User permission management in Linux

We have been doing identity management and access control for a long time now, but sometimes we don't put many restrictions on our development team, keeping in mind that they are our friends. That may be true, but I still feel it is necessary to define fine-grained access control for everyone.


In this post I am going to discuss some of the very common and simple Linux user management tasks.


1) Adding a new user to Linux (useradd)


-d  home directory
-s  login shell (starting program)
-g  primary group assigned to the user
-G  other groups the user belongs to
-m  create the user's home directory


example:


useradd -g users -G mgmt -s /bin/bash -d /home/roger -m roger


2) Modifying existing user (usermod)


-d  home directory
-s  login shell (starting program)
-p  password (usermod expects the already-encrypted value here)
-g  primary group assigned to the user
-G  other groups the user belongs to


example:


usermod -G others roger


3) Deleting a user (userdel)


-r (remove home directory)


example:


userdel -r roger


4) /etc/passwd is the file that stores user names and their primary group assignments. The format of the file is:


User name (normally all lower case)
Password field (normally just an 'x', meaning the encrypted password lives in /etc/shadow)
User ID (a unique number for each user)
Primary group ID
Comment (normally the person's full name)
Home directory (normally /home/<user name>)
Default shell (normally /bin/bash)


Each field is separated by a colon.
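
For example, the entry for the roger account created above might look like this (the UID, GID and full name are made-up placeholders):

roger:x:1005:100:Roger Smith:/home/roger:/bin/bash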


5) The encrypted password for each user is actually stored in /etc/shadow; /etc/passwd only carries the 'x' placeholder.


6) Group information for the user is stored in /etc/group. Format of this file is


Group name
Group password (hardly ever used)
Group ID
User names (separated by commas)


Note: Do not edit this file directly. Modify the user with the usermod command, which will update this file for you.


Sudo


As I mentioned earlier, you don't want users to share the root account. Sudo is there to help us achieve this. Here are some simple usage notes:


1) Sudo permissions are stored in the file /etc/sudoers


2) Never edit the file using vi. Use visudo to edit the file.


visudo -f /etc/sudoers


3) Add users to a group and grant the sudo permissions to that group. This keeps the sudoers file clean (see the example rules after this list).


4) Enable sudo logging by putting the line below in the sudoers file:


Defaults logfile=/var/log/sudolog
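
And as an illustration of point 3 (the group names and command are made up), group-based rules in the sudoers file look like this:

# members of the mgmt group may run any command as any user
%mgmt    ALL=(ALL) ALL
# members of the webops group may only restart Apache as root
%webops  ALL=(root) /sbin/service httpd restart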


There is a lot more that can be done with sudoers, but my aim here is to give real-life, usable tips rather than reproduce the Linux man pages. Please use the man page if you want more :-)

Tuesday, July 10, 2007

Single Sign On - Reduced Sign On


SSO gives users the flexibility of not having to enter their credentials again and again to access different applications. Every one of us is happy about that, but there is a side effect to this solution. For example, suppose you are logged into a system that uses SSO and signs you into the payroll site automatically. You sign into the portal, go for a cup of coffee with a friend, and forget to lock your system. Your neighbour, who is always interested in how much you are earning, gets a chance to move his chair to your desk and find that out quickly.



This was just one example; there could be multiple sensitive applications residing in an enterprise portal that are critical for you. That is the reason organizations are adopting the concept of Reduced Sign On.



Reduced Sign On: This concept handles the above scenario by prompting for an additional verification step when you try to access critical applications. This extra layer of authentication could be any one of the items below:



1) Challenge Question



2) Digital Certificate



3) Hardware Token number



4) Smart Card



5) Biometrics



Reducing users' sign-on complexity problems requires a balance between user satisfaction and security. If the scale swings too far toward security when trying to prevent a breach, user satisfaction decreases. Similarly, if the scale swings toward user satisfaction, you can compromise IT security.






Importance of Time Server in SSO environment


Once I deployed an SSO agent at a client location and was scratching my head for weeks trying to find out what was stopping SSO from working properly. I checked my configuration hundreds of times and installed the SSO agent on another server with the same configuration, where it worked fine, so what the heck was going on with this machine? I checked the OS patch level and everything else, but still NO LUCK!!!



In this article I am going to talk a little bit about the root cause.



Time Server: The SSO token contains a time stamp, generated by the server, that is used to check for session timeout. My server was on a box whose clock read time T, while my agent was on a box whose clock read T+30 minutes. Session expiry was 30 minutes.



That is why whenever my agent box received the SSO token and validated it, the token always appeared to be already expired.



In an SSO environment, please make sure the clocks on all servers are synchronized (for example with NTP), otherwise you may also run into this tricky-to-debug situation.
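
As a toy sketch of the arithmetic (the class and method names are made up, not from any real SSO product), this is essentially the check that fails when the validating box's clock runs ahead of the issuing server's:

import java.util.concurrent.TimeUnit;

public class TokenSkewDemo {
    static final long SESSION_EXPIRY_MS = TimeUnit.MINUTES.toMillis(30);

    // A token issued at 'issuedAtMs' is valid if, by the validator's local clock
    // 'validatorNowMs', less than the session expiry has elapsed.
    static boolean isValid(long issuedAtMs, long validatorNowMs) {
        return validatorNowMs - issuedAtMs < SESSION_EXPIRY_MS;
    }

    public static void main(String[] args) {
        long serverNow = System.currentTimeMillis();   // clock on the SSO server
        long skew = TimeUnit.MINUTES.toMillis(30);     // agent box runs 30 minutes ahead
        long agentNow = serverNow + skew;

        // A token issued "right now" by the server is already expired on the agent box
        System.out.println("Valid on server: " + isValid(serverNow, serverNow)); // true
        System.out.println("Valid on agent:  " + isValid(serverNow, agentNow));  // false
    }
}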


Types of attack on Password


It always seems very simple when we type our credentials into banking sites to do a transaction, or into commercial sites to purchase some stuff, but in this post I am going to explain some of the types of password attacks that can make you bankrupt. I am not kidding; read on:




1) Hardware device (keylogger): When we talk about hardware we assume it takes time to install and that only experts would be able to use this attack, but NO, this hardware device is very simple to install; a kid could plug it in between the keyboard and the computer in 10 seconds or less. See the image below to get an idea of how simple it could be. Criminals have installed these devices on bank machines to capture banking credentials, which has cost banks millions of pounds. Students have installed them on their teachers' systems to get access to exam papers. There could be lots of other instances where these simple plugs can be installed and exploited.







2) Software malware - keyboard logger: We all enjoy free stuff, and now and then we tend to use free software available on the internet. This software can save a couple of dollars in your pocket but may cost you a lot. Imagine a scenario in which you downloaded some free software that carries malware which hooks into the OS to capture your credentials when you log in to the system. Such malware can also capture the credentials you type into banking sites, store these passwords locally, and send them to the attackers' servers, where they can use your bank credentials to transfer money to their own accounts or play in a casino. Be very careful when you use free software.




3) Dictionary attack: We all use a dictionary and know that most of our passwords come from one word, or a combination of words, found in the dictionary. Yes, you can see where I am going with this. People have written software that can be pointed at a system to try dictionary words and their combinations as passwords. One mitigation, which many organizations and banks have already implemented, is locking the account after N unsuccessful password attempts.




4) Social engineering attack: Suppose you get a call from someone claiming to be from the security team, saying they received an alert that your account has a problem which may end up deleting all of your data from the box, and that they can fix it for you if you just tell them your account password. Chances are that some of us will agree and simply hand it over. I have seen many organizations where users give their choice of password to the help-desk staff and ask them to reset their password to the one they want. Users do not realize that this opens the door for another person to access sensitive material they should not be looking at.




5) James Bond attack (acoustic eavesdropping): Researchers claim that by listening to keystrokes they can guess a user's password with around 90% accuracy. This is one of the reasons that highly confidential rooms are soundproofed so that not a single sound leaves the room.





Monday, July 9, 2007

Active Directory PDC vs FSMO

Today I faced a strange issue in my environment which forced me to read about FSMO. Let me briefly describe the problem: we have enabled bi-directional password sync, which requires an agent to be installed on the Active Directory (AD) domain controllers. In some cases, when a user changes the password the Microsoft way (CTRL+ALT+DEL), the screen just hangs. While troubleshooting, my AD team told me that the agent is installed on the FSMO role holder, and I had no idea what the heck they were talking about, so I read up on it and thought of posting the same here.


FSMO stands for Flexible Single Master Operations


Windows 2000 Multi-Master Model


A multi-master enabled database, such as the Active Directory, provides the flexibility of allowing changes to occur at any DC in the enterprise, but it also introduces the possibility of conflicts that can potentially lead to problems once the data is replicated to the rest of the enterprise. One way Windows 2000 deals with conflicting updates is by having a conflict resolution algorithm handle discrepancies in values by resolving to the DC to which changes were written last (that is, "the last writer wins"), while discarding the changes in all other DCs. Although this resolution method may be acceptable in some cases, there are times when conflicts are just too difficult to resolve using the "last writer wins" approach. In such cases, it is best to prevent the conflict from occurring rather than to try to resolve it after the fact.


Windows 2000 Single-Master Model




To prevent conflicting updates in Windows 2000, the Active Directory performs updates to certain objects in a single-master fashion. In a single-master model, only one DC in the entire directory is allowed to process updates. This is similar to the role given to a primary domain controller (PDC) in earlier versions of Windows (such as Microsoft Windows NT 3.51 and 4.0), in which the PDC is responsible for processing all updates in a given domain.

The Windows 2000 Active Directory extends the single-master model found in earlier versions of Windows to include multiple roles, and the ability to transfer roles to any domain controller (DC) in the enterprise. Because an Active Directory role is not bound to a single DC, it is referred to as a Flexible Single Master Operation (FSMO) role. Currently in Windows 2000 there are five FSMO roles:


1) Schema master
2) Domain naming master
3) RID master
4) PDC emulator
5) Infrastructure master


Courtesy: Microsoft KB


 

Friday, July 6, 2007

How Java Cryptography Extension works - Password Based Encryption Concept


In my last post I discussed the basic APIs used for encrypting and decrypting data. To encrypt or decrypt you need a key, and in that approach the key has to be stored somewhere on the system. In password-based encryption you provide the key material manually at encryption/decryption time, because you remember the password. The more complicated the password is, the stronger the encryption will be. It cannot be as strong against attack as a key generated by the API, but it can be good for encrypting data that you are going to decrypt at the other end, since you know the password.




Note: the password is first hashed to derive the key; it is not used directly as plain text.







Example: a key generated by the API for 3DES encryption has a keyspace of 2^168 possibilities, while a normal person's password is 6-8 characters long, which comes to roughly 26^6 to 26^8 possibilities. I have used 26 because there are 26 letters in the English alphabet; if you add digits and special characters the number is a little higher, but still nowhere close to the keyspace of an API-generated key.

To mitigate the problem mentioned above, there are two options for increasing the security of a password-based key (a small code sketch follows the two points below):

1) Salt: This is a set of extra random bits mixed with the password when generating the key used to encrypt/decrypt the data. The random bits are stored (Base64 encoded, in the clear) alongside the encrypted data so they can be used again for decryption. Each time data is encrypted, a new salt is added for flavour.



2) Iteration Count
The iteration count is an attempt to increase the time that an attacker will have to spend to test possible passwords. If we have an iteration count of a thousand, we need to hash the password a thousand times, which is a thousand times more computationally expensive than doing it just once. So now our attacker will have to spend 1000 times more computational resources to crack our password-based encryption.
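
As a rough sketch of how salt and iteration count plug into the standard JCE password-based encryption classes (the algorithm name "PBEWithMD5AndDES" is just the classic example from the JCE documentation; the full worked example is promised in a later post):

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;

public class PBESketch {
    public static void main(String[] args) throws Exception {
        char[] password = "my secret passphrase".toCharArray();

        // 8 random bytes of salt; stored alongside the ciphertext so decryption can reuse it
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        int iterationCount = 1000;

        // Derive a key from the password
        SecretKey key = SecretKeyFactory.getInstance("PBEWithMD5AndDES")
                .generateSecret(new PBEKeySpec(password));

        // Salt and iteration count are passed in as algorithm parameters
        PBEParameterSpec params = new PBEParameterSpec(salt, iterationCount);

        Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
        cipher.init(Cipher.ENCRYPT_MODE, key, params);
        byte[] ciphertext = cipher.doFinal("some secret data".getBytes("UTF8"));

        // Decryption reuses the same password, salt and iteration count
        cipher.init(Cipher.DECRYPT_MODE, key, params);
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF8"));
    }
}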



BASE64 Encoding


Binary data is typically stored in bytes of 8-bits. Standard ASCII is only 7 bits though, so if we want to display binary as ASCII, we're going to lose at least one bit per byte. BASE64 encoding is a way of overcoming this problem. 8-bit bytes are converted to 6-bit chunks and then into characters. Six bits are used so that some control characters can be used indicating when the data ends. The encoded characters can then be displayed on the screen and converted back into binary with no difficulty. Of course, since we're moving from an 8-bit chunk to a 6-bit chunk, we're going to have more chunks - 3 bytes becomes 4 characters and vice-versa.
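
A quick sketch of the 3-bytes-to-4-characters round trip (this uses java.util.Base64, which ships with Java 8 and later; older code typically relied on a third-party encoder):

import java.util.Base64;

public class Base64Sketch {
    public static void main(String[] args) {
        byte[] raw = {(byte) 0xDE, (byte) 0xAD, (byte) 0xBE}; // 3 bytes of binary data

        // 3 bytes (24 bits) become four 6-bit chunks, i.e. 4 printable characters
        String encoded = Base64.getEncoder().encodeToString(raw);
        System.out.println(encoded + " (" + encoded.length() + " characters)");

        // Decoding turns the 4 characters back into the original 3 bytes
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(decoded.length + " bytes recovered");
    }
}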





Encryption using PBE






Decryption using PBE







I will give the example of PBE in another post.


I took part of the information in this post from http://javaboutique.internet.com/resources/books/JavaSec/javasec2_2.html







Thursday, July 5, 2007

How Java Cryptography Extension works - Encryption and Decryption???


The Java Cryptography Extension is a huge topic and I am not going to write a complete book here, which would make my life miserable and bore readers chapter after chapter. In this post I will just discuss how encryption and decryption work using the JCE APIs and then give one working example. I will cover other JCE features in coming posts.

Symmetric encryption

I know most of you are aware of how symmetric encryption works and what its benefits and downsides are, but for those to whom this is a new topic, it is my responsibility to give a little bit of background. This encryption method uses a single key that is shared by both parties (the encryptor and the decryptor). It is much, much faster than asymmetric encryption, but exchanging the key between the two parties is the hard part. It is used where bulk data needs to be encrypted and decrypted. Even where asymmetric encryption is involved, symmetric encryption is used for the actual data, because the asymmetric part is only used to exchange the symmetric key between the parties (please refer to my post How SSL works).

I don't want to take any more of your precious time, so let's get back to real business.

The main cryptography classes used in this article come from the javax.crypto.* package.

Most of the classes in the JCE are obtained through factory methods rather than the new operator.
The Cipher class is the engine of the car, and the following are the four wheels on which you can enjoy the ride.

Wheel 1 : getInstance()

Make a call to the class's getInstance() method, with the name of the algorithm and some additional parameters like so:

Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");

The first part of the string is the name of the algorithm, in this case "DESede". The second is the mode the cipher should use, "ECB", which stands for Electronic Code Book. The third is the padding, specified as "PKCS5Padding". If the mode and padding are not mentioned, provider-specific defaults are used.

Wheel 2 : init()

Once an instance of Cipher is obtained, it must be initialized with the init() method. This declares the operation mode, which should be one of ENCRYPT_MODE, DECRYPT_MODE, WRAP_MODE, or UNWRAP_MODE, and also passes the cipher a key (java.security.Key, described later). Assuming we have a key declared, initialized, and stored in the variable myKey, we can initialize a cipher for encryption with the following line of code:

cipher.init(Cipher.ENCRYPT_MODE, myKey);


Wheel 3 : update()

In order to actually encrypt or decrypt anything, we need to pass it to the cipher in the form of a byte array. If the data is in the form of anything other than a byte array, it needs to be converted. If we have a string called encryptme and we want to encrypt it with the cipher we've initialized above, we can do so with the following two lines of code:

byte[] plaintext = encryptme.getBytes("UTF8");
byte[] ciphertext = cipher.update(plaintext);

Ciphers typically buffer their output. If the input is large enough that it produces some ciphertext, it will be returned as a byte array. If the buffer has not been filled, then null will be returned. Note that in order to get bytes from a string, we should specify the encoding method. In most cases, it will be UTF8.

Wheel 4 : doFinal()

Now we can actually get the encrypted data from the cipher. doFinal() will produce a byte array, which is the encrypted data.

byte[] ciphertext = cipher.doFinal();

A number of the methods we've talked about are overloaded with different arguments, such as an offset and length for the byte arrays passed in.
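
To make the buffering behaviour of update() concrete, here is a small sketch that encrypts a stream in chunks (it borrows the KeyGenerator call described in the next section, and the input is just a placeholder byte array):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;

public class UpdateLoopSketch {
    public static void main(String[] args) throws Exception {
        Key key = KeyGenerator.getInstance("DESede").generateKey();
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        // Pretend this is a large stream; in real code it could be a FileInputStream
        InputStream in = new ByteArrayInputStream(new byte[10000]);
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        byte[] buffer = new byte[4096];
        int n;
        while ((n = in.read(buffer)) != -1) {
            // update() may return ciphertext, or null if it is still buffering
            byte[] chunk = cipher.update(buffer, 0, n);
            if (chunk != null) {
                out.write(chunk);
            }
        }
        // doFinal() flushes whatever is left in the buffer and applies the padding
        out.write(cipher.doFinal());

        System.out.println("Ciphertext length: " + out.size());
    }
}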


As we need a key to encrypt/decrypt the data, let's discuss the java.security.Key interface a bit (NOTE: this is an interface).

We obtain instances of this interface through generator/factory classes such as javax.crypto.KeyGenerator or java.security.KeyFactory.

The KeyGenerator class has three methods that matter here:

1) getInstance()
The example below obtains a key generator for DESede (TripleDES):
KeyGenerator keyGenerator = KeyGenerator.getInstance("DESede");

2) init()
The code below initializes the generator for a 3DES key, which is always 168 bits:
keyGenerator.init(168);

3) generateKey()

Finally we get the key using this method.
Key myKey = keyGenerator.generateKey();

Now that we have all the necessary things to build the house, let's construct it. Just think before you start: which brick fits in which spot to make it a perfect house?

1) We need a key which we will be using for encryption and decryption.
2) We need to instantiate the Cipher class using its factory method to do the actual job.



package com.kapil.util;

import java.security.Key;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;

public class JESEncryptDecrypt {

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.err.println("Please enter text to encrypt");
            System.exit(1);
        }
        String text = args[0];

        System.out.println("Generating a DESede (TripleDES) key...");

        // Create a TripleDES key
        KeyGenerator keyGenerator = KeyGenerator.getInstance("DESede");
        keyGenerator.init(168); // need to initialize with the keysize
        Key key = keyGenerator.generateKey();

        System.out.println("Done generating the key.");

        // Create a cipher using that key to initialize it
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        byte[] plaintext = text.getBytes("UTF8");

        // Print out the bytes of the plaintext
        System.out.println("\nPlaintext: ");
        for (int i = 0; i < plaintext.length; i++) {
            System.out.print(plaintext[i] + " ");
        }

        // Perform the actual encryption
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Print out the ciphertext
        System.out.println("\n\nCiphertext: ");
        for (int i = 0; i < ciphertext.length; i++) {
            System.out.print(ciphertext[i] + " ");
        }

        // Re-initialize the cipher to decrypt mode
        cipher.init(Cipher.DECRYPT_MODE, key);

        // Perform the decryption
        byte[] decryptedText = cipher.doFinal(ciphertext);

        String output = new String(decryptedText, "UTF8");

        System.out.println("\n\nDecrypted text: " + output);
    }
}

How Caching works???


In today's web world, many concurrent users access web applications. Most web applications access a database (relational or hierarchical) in some way or another to authenticate the user and validate the user's rights on the application. If a web application hits the database every time a user accesses the site, performance will drive the user to a competitor, because the competitors are smart and are implementing caching :-)

Web applications use caching to store session information and authorization rights for fast access. If a web application does not manage a cache, it has to hit the database for the authorization information every time a user accesses a link, which is both time and resource consuming.

When an object is retrieved from the database for the first time, instead of being discarded the information is stored in a buffer called a cache. There are a number of complications in storing retrieved information in a cache:

1) Since caching is meant for faster retrieval, if we keep adding newly retrieved content from the database the cache will grow to an unmanageable size. Caches therefore evict less useful entries based on different algorithms, some of which are mentioned below (a small eviction sketch follows this list):

a) Least Recently Used (LRU)
b) Least Frequently Used (LFU)

2) Implementing the cache with a data structure that makes fast retrieval/search possible (typically a hash map).

3) Consider a scenario in which a user initially has rights to access 10 links on a site and those rights are cached. Later, those access rights are modified (rights added or removed) from the admin console. If the cache is not updated in time, the user will either keep access they should have lost or be denied access they now have. To overcome this problem, the cache should be invalidated or updated at the same time the rights are modified.

4) Consider another scenario in which there are two servers behind a load balancer and cached objects are local to each server (i.e. the cache is not shared between them). If one of the servers goes down, what happens to its cached objects? To handle this, the cache manager needs to notify the other servers in the cluster of newly added/removed cached objects.

5) Implementing a maximum time for which a cached object stays alive in the cache (time to live). Once that time is reached, the object should be removed and re-cached on the next user request.

6) Caching parameters should be configurable so administrators can change them based on requirements.
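
As a minimal sketch of the LRU eviction idea from point 1 (a toy illustration built on the JDK's LinkedHashMap, not on any of the frameworks listed below):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true makes iteration order reflect how recently entries were used
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cache exceeds its limit
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, String> rights = new LruCache<String, String>(2);
        rights.put("alice", "links 1-10");
        rights.put("bob", "links 1-5");
        rights.get("alice");              // touch alice so she becomes most recently used
        rights.put("carol", "links 1-3"); // bob is now least recently used and is evicted
        System.out.println(rights.keySet()); // prints [alice, carol]
    }
}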

Caching frameworks

Several object-caching frameworks (both open source and commercial implementations) provide distributed caching in servlet containers and application servers. A list of some of the currently available frameworks follows:

Open Source:
Java Caching System (JCS)
OSCache
Java Object Cache (JOCache)
Java Caching Service, an open source implementation of the JCache API (SourceForge.net)
SwarmCache
JBossCache
IronEye Cache

Commercial:
SpiritCache (from SpiritSoft)
Coherence (Tangosol)
ObjectCache (ObjectStore)
Object Caching Service for Java (Oracle)




How reverse proxy works ???


You must have heard the term reverse proxy a couple of times and wondered what the heck it is. I am going to give you some idea about it in this post, but before getting to reverse proxies I would like to explain how a forward proxy works.

Forward Proxy: A forward proxy acts as a gateway for a client's browser, sending HTTP requests on the client's behalf to the Internet. The proxy protects your inside network by hiding the actual client's IP address and using its own instead. When an outside HTTP server receives the request, it sees the requestor's address as originating from the proxy server, not from the actual client. In organizations you configure this in your browser settings, and most of it happens behind the scenes.

Reverse Proxy: A reverse proxy comes into play when a request is sent from outside to the organization's web server. It sits in front of the web server and acts as a gateway to an HTTP server or HTTP server farm by being the final IP address for requests from the outside. The firewall works tightly with the reverse proxy to help ensure that only the reverse proxy can access the HTTP servers hidden behind it. From the outside client's point of view, the reverse proxy is the actual HTTP server.

Benefits of Reverse Proxy


  1. Clients now have a single point of access to your HTTP servers.

  2. You have a single point of control over who can access and to which HTTP servers you allow access.

  3. Easy replacement of backend servers or host name changes.

  4. Ability to assimilate various applications running on different Operating Systems behind a single facade.


Downside of Reverse Proxy



  1. If the reverse proxy fails and no failover is supported, access to everything behind it goes for a toss.

  2. If an attacker does compromise the reverse proxy, the attacker may gain more insight into your HTTP server architecture; or, if the HTTP servers it is hiding are inside the firewall, the attacker might be able to compromise your internal network.

  3. A lot of translation has to occur for the reverse proxy and the firewall to do their work, so requests may be fulfilled a little more slowly.


Many web server plug-ins are available that support reverse proxy functionality. For example, the Apache module mod_proxy supports both forward and reverse proxy configurations, depending on your requirements.
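
As a rough sketch (the host names and paths are made up), a minimal reverse proxy configuration with mod_proxy looks like this:

# Never act as an open forward proxy
ProxyRequests Off

# Requests for /app on the proxy are forwarded to the internal HTTP server
ProxyPass        /app http://internal-app.example.local:8080/app
# Rewrite redirects coming back from the backend so they point at the proxy
ProxyPassReverse /app http://internal-app.example.local:8080/app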





Sunday, July 1, 2007

Static Vs Dynamic LDAP Groups


LDAP directory servers contain information about people: users, employees, customers, partners, and others. Many times it makes sense to associate entries together in groups. A group is basically a collection of entries. These entries can be statically assigned to a group, or they can share a set of common attributes on which a dynamic group can be formed.


1) Static Group


A static group lists each member individually, using a structural objectclass such as groupOfNames or groupOfUniqueNames, depending on the directory server implementation. These objectclasses require the member attribute (or uniqueMember in the case of groupOfUniqueNames). Static groups are fine if the number of users in a group is not large, because the group entry contains a value for each user who belongs to it. The more people assigned to the group, the more complicated it becomes to manage.
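
For example, a static group entry might look like this (the member DNs are made-up placeholders):

dn: cn=Austin Admins,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfNames
cn: Austin Admins
member: uid=dsmith,ou=People,dc=example,dc=com
member: uid=rjones,ou=People,dc=example,dc=com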


2) Dynamic Group


Dynamic groups allow you to use an LDAP URL to define a set of rules that match only group members. For dynamic groups, the members share a common attribute or set of attributes defined in the memberURL filter. These are good if the number of users in the group is very large. A dynamic group is a much better choice than a static group in that case, because the set of members is adjusted automatically as new users are added and existing users are removed.


Example :


dn: cn=Austin Users,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfURLs
cn: Austin Users
memberURL:
ldap:///ou=People,dc=example,dc=com??sub?(&(l=Austin)(st=Texas))


In the above example, all users under ou=People whose location (l) is Austin and whose state (st) is Texas belong to Austin Users.


Roles


Roles are similar to groups, but work differently. Groups are effectively listings of members. In order to find out, for example, which groups "David" belongs to, you would need to look at every group and see if it contains "David". Roles, on the other hand, are associations that are stored in the users' entries themselves.


As a member of a role, you have the authority to do what is needed for the role in order to accomplish a job. Unlike a group, a role comes with an implicit set of permissions. There is not a built-in assumption about what permissions are gained (or lost) by being a member of a group.








