A Study on Six Degrees of Separation Using Self-Adapting Algorithm

Shubham Kumar


  • Author: Shubham Kumar
  • Co-Authors: Shaik Naseera, Gayathri P, Santhi H, Gopichand G, Geraldine Bessie Amali D
Keywords: social network, connectivity, six degrees, separation, search, link


Six Degrees of Separation is a theory stating that any two people in the world can be connected to each other through a chain of no more than six intermediate links. In this paper we use this theory to build an application that helps users expand their professional networks and take advantage of opportunities they otherwise could not. The application maintains a central search server that holds the details of every registered user. The server continuously discovers links between users based on their activity and adds each newly discovered link to a link table. This greatly reduces client wait times, since in most cases the requested link is already present in the table. Third-degree links are cheap to establish, so the server computes them by default for every client from that client's friend list; because most requests stay within a user's friend circle, this makes the majority of client requests easy to answer. We employ self-adapting algorithms that compute statistics such as the average degree of separation, establish links autonomously for the most active clients, refresh every client's third-degree links, and keep the link table up to date.

The system uses a server-side compute model in which the main handling logic runs in a parent process or thread. Whenever a client connects to the server over a socket, the server spawns a child thread to serve that client. The client sends only its client ID and the target name; the server uses these to compute the link between the client and the target. Once the link is found, the client thread returns it to the client, which displays it to the user. The client thread is then terminated and the connection closed. The main server thread is responsible for accepting client requests, creating new threads, background passive searching, updating the link table, and maintaining the statistics. Client details are stored in a central database that is accessed only by the server-side algorithms.
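The paper does not include code, so the following minimal Python sketch illustrates the design described above under some simplifying assumptions: the class and method names (`LinkServer`, `find_link`, `precompute_third_degree`, `handle_client`) are our own, an in-memory dictionary stands in for the central database, a breadth-first search stands in for the server's link-finding step, and Python threads stand in for the per-client child threads.

```python
import threading
from collections import deque


class LinkServer:
    """In-memory stand-in for the central search server (hypothetical sketch)."""

    def __init__(self, friends):
        # friends: user id -> set of friend ids (stands in for the central database)
        self.friends = friends
        # link table: (source, target) -> cached chain of intermediate users
        self.link_table = {}
        self.lock = threading.Lock()

    def find_link(self, src, dst):
        """Return the shortest chain from src to dst, caching it in the link table."""
        with self.lock:
            if (src, dst) in self.link_table:
                return self.link_table[(src, dst)]
        # Breadth-first search over the friend graph
        parent = {src: None}
        queue = deque([src])
        while queue:
            user = queue.popleft()
            if user == dst:
                chain = []
                while user is not None:
                    chain.append(user)
                    user = parent[user]
                chain.reverse()
                with self.lock:
                    self.link_table[(src, dst)] = chain
                return chain
            for friend in self.friends.get(user, ()):
                if friend not in parent:
                    parent[friend] = user
                    queue.append(friend)
        return None  # no chain exists

    def precompute_third_degree(self, src):
        """Passively establish links up to three hops from src, as done for all clients."""
        depth = {src: 0}
        queue = deque([src])
        while queue:
            user = queue.popleft()
            if depth[user] == 3:
                continue  # do not expand beyond the third degree
            for friend in self.friends.get(user, ()):
                if friend not in depth:
                    depth[friend] = depth[user] + 1
                    queue.append(friend)
                    self.find_link(src, friend)  # populate the link table

    def handle_client(self, src, dst, results):
        """Body of the per-client child thread: look up the link and report it back."""
        results[(src, dst)] = self.find_link(src, dst)
```

In a real deployment the `handle_client` body would run in the child thread spawned for each socket connection and write its result back over the socket; here a shared `results` dictionary plays that role so the sketch stays self-contained.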

