
Discrete Event Dynamic Systems: Theory and Applications, 14, 381–393, 2004. © 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.

ProgramID

YU-CHI HO, Harvard University, Cambridge, MA, USA, and CFINS, Tsinghua University, Beijing, China
DAVID L. PEPYNE ([email protected]), Harvard University, Cambridge, MA, USA, and University of Massachusetts, Dartmouth, USA
QIANCHUAN ZHAO, CFINS, Tsinghua University, Beijing, China
HONG LIU, University of Massachusetts, Dartmouth, MA, USA
QIN YU, University of Massachusetts, Dartmouth, MA, USA
BRENT DUKES, University of Massachusetts, Dartmouth, MA, USA

Abstract. Although systems engineers have developed powerful tools for measuring, modeling, and optimizing system performance, system security is much less well understood. This paper discusses the issue of system security in the context of Internet security and introduces a simple idea called ProgramID. ProgramID is an example of a strategy based on a principle we call think globally, act locally (TGAL), a general principle for distributed, decentralized management of networks. Under the TGAL principle, a combination of simple security strategies acting at a local level can produce measurable increases in global security. ProgramID can be implemented via a simple service that users can add to their operating system to force programs to identify themselves before they can execute. This gives individual computer users an extra layer of protection against malicious programs such as the increasingly prevalent email viruses. Using epidemic-like models, we analyze how global security is impacted when some fraction of Internet users have ProgramID protection.

1. Introduction

The Internet is among our most important and complex communication infrastructures. Given the role it has come to play in today's culture, it is difficult to remember that the Internet as we know it today really only began in the early '90s with the introduction of the Internet browser. With the browser came the Internet's explosive growth, so that the Internet now spans the globe and supports thousands of applications never imagined with the complicated interfaces that preceded the browser. Today we have on-line commerce, auctions, chat groups, long-distance phone service, banking, gaming, and the list goes on. This is a far cry from the simple, text-only email and command-line file transfer services of 20 years ago.


By all metrics, the Internet's performance in the face of the most rapid growth and evolution of any system ever built by mankind has been phenomenal. One of the properties of the Internet that made this possible is that the Internet is essentially a general-purpose information communication network interconnecting a population of general-purpose computers. The Internet's protocols simply make a best effort to transport information between computers with little concern for what the information represents. Likewise, the personal computers connected to the Internet make their best effort to interpret the information they receive. This philosophy imposes few restrictions on the ultimate use of the system, leading to simple, robust, and scalable protocols. The simplicity of the Internet's basic protocols works tremendously well when the information being transported is not malicious. However, their simplicity makes it very hard to detect and prevent malicious activities and security breaches. As the global population of users and the value of the services and information accessible through the Internet continue to increase, the number of security incidents also continues to increase. Virtually every Internet user has by now received an email virus, and it is estimated that a majority of personal computers connected to the Internet have been infected by some kind of malicious "spyware" that runs silently in the background, recording a user's activities (e.g., passwords) and broadcasting them over the Internet to some remote stranger.

For the last few years, security has dominated Internet-related research. In contrast to the well-developed ways to model, measure, and optimize Internet performance metrics such as latency, capacity, loss rate, and so on, developing models and metrics for Internet security is proving much more difficult. One reason may be that, unlike performance measures, which tend to be objective, security tends to have a subjective interpretation.
Formally, computer network security is generally defined to include (White et al., 1996; Pfleeger and Pfleeger, 2003):

Confidentiality: Information in a computer, as well as information transmitted between computers, should be revealed only to authorized individuals.

Integrity: Information in a computer, as well as information transmitted between computers, should be free from unauthorized modification or deletion.

Availability: Authorized users of a computer or computer network should not be denied access when access is desired.

Authorized Use: Only authorized individuals should be allowed to use a computer system, and then only in a prescribed manner.

Message Authentication: One should be able to be sure that the individual who the system claims sent a message did indeed transmit it.

Clearly, each individual Internet user will have different thresholds for each of these security concepts. Moreover, these thresholds will generally change with the service and information being considered. You may not care if a stranger steals a copy of a file of your favorite dessert recipes, but you would probably not want a stranger to get access to your tax records.

1.1. Social Metaphors for Internet Security

Some Internet security issues can be thought of in social terms. In particular, imagine a hypothetical world with the following characteristics:

Unlimited Anonymity: Someone may enter your home, but it may be very difficult for you to detect that they are there. It is very easy for people to hide evidence of their activities. Moreover, if you find evidence that someone is inside your home, it can be very hard to identify who that person is and exactly what they are doing there. Those with enough sophistication can hide their identity by lying, cheating, masquerading, or impersonating others.

Unlimited Mobility: People are free to enter your home almost at will. You may lock your doors, you may post guards, but there always seem to be open windows through which people get in.

Implied Authority: Once a person enters your home, they have the same authority that you have. They can do anything you can do, they can do it without your permission, and they can often do it without your having any knowledge of what they have done.

Little Accountability: There are certain people, such as maids, au pairs, plumbers, electricians, etc., whom we invite into our homes because we need or enjoy the services they provide. In our hypothetical world, however, these people require us to sign a waiver that absolves them of responsibility for whatever damage they may cause.

Little Punishment: People who attempt to break into a home are not punished. People who actually do break into a home are punished only if they can be caught, which happens only rarely because it is difficult to detect and identify them (see Unlimited Anonymity above).

It is not hard to imagine that you might feel somewhat insecure in such a world. Yet this is a reasonably accurate description of the world that exists on the Internet.
Just make the following substitutions: homes = our personal computers; people = software programs; locked doors = login procedures; guards = virus protection programs, firewalls, intrusion and misuse detection systems; windows = executable email attachments, web pages with executable code, unsecured wireless data links, improperly configured systems.

Specifically, implied authority refers to the fact that most personal computers use an operating system that executes any program that requests execution, just as if you had manually launched it yourself. Once a program is running, it has access to all the data stored on the computer and is able to modify, delete, or reveal it to a stranger. Unlimited anonymity regards the fact that the operating systems used by most personal computers make it difficult to know what programs might be running on the computer at any given time and what those programs are doing. Thus, you may never know that your computer is being compromised. Unlimited mobility is related to the fact that when a personal computer is connected to the Internet there are many different ways a malicious program can be downloaded and executed, including executable email attachments, scripts on web pages, insecure wireless data links, and so forth. Little accountability refers to current software licenses that make security an end user's concern and not the responsibility of the software manufacturer or vendor. Little punishment refers to the current difficulty in catching and prosecuting the people who write and release malicious viruses, worms, and other Internet-based attacks. These people are so difficult to catch precisely because the current design of the Internet gives expert crackers so many ways to maintain anonymity.

1.2. Think Globally, Act Locally (TGAL)

Many Internet security solutions have been developed or proposed. Some of these are decentralized, while others require centralized coordination. Decentralized solutions include the well-known virus protection programs and personal firewalls. The centralized solutions tend to focus specifically on the five points made above. For example, to make the Internet less anonymous, new Internet protocols to facilitate monitoring, tracking, tracing, and surveillance have been explored. Little punishment has led to debates over special laws to make investigation and prosecution of cyber-crime more effective. Software product liability laws have been suggested as a way to hold Internet service providers and software manufacturers accountable for security weaknesses.

Given the sheer scale and global reach of the Internet, centralized solutions would seem to be extraordinarily difficult, if not impossible, to implement and evaluate, for technical as well as political reasons. For the Internet, the root design philosophy has been one of decentralization (Clark, 1988). For Internet security, decentralized solutions would also seem to be the most feasible. First, because security is largely subjective, security solutions should likely be individualized. Second, since security often has a negative impact on performance, individuals should be allowed to make their own performance vs. security tradeoffs.

In this paper, we explore one such decentralized security solution. Our solution, which we call ProgramID, is a practical and simple solution inspired by the social mechanisms that we use in the real world to identify each other. ProgramID is based on the principle of think globally, act locally (TGAL). The idea behind TGAL is to design local solutions with a global objective in mind, that is, such that if a certain fraction of users adopt the solution, then a certain measurable level of global security can be achieved.
While we do not claim ProgramID can solve all Internet security problems, it does serve a complementary role to other security solutions, thereby providing another layer of security in any defense-in-depth security strategy. The remainder of the paper is organized as follows. The next section presents the concept of ProgramID and describes how it can be implemented in the Windows environment. This is followed by an analysis of the effectiveness of decentralized local schemes such as ProgramID. The paper closes with a brief discussion.

2. ProgramID

In the real world we often hire people to work in our homes as cleaning people, maids, au pairs, personal assistants, and so on. Once these people are inside our homes, they can do almost anything we can do there, putting us at risk for theft and other security exploits.


A common social convention is that people should explicitly request permission before they enter our homes. When a person requests permission, it is up to us to take the time to do various checks to find out who they are by verifying their credentials against independent sources. Such identification is only necessary the first time a person enters our home. On subsequent occasions, all we need to do is make sure they are the same person we gave permission to at the first meeting. In the real world we do this identification visually, or by giving them key cards, ID badges, usernames and PINs, etc. While these mechanisms are not perfect, they work reasonably well and provide a reasonable level of global security within our society.

A similar mechanism could easily be implemented on our personal computers. A simple but reasonable rule would be to require that any program that seeks to execute first explicitly request permission to do so. Once a program has been given initial permission to execute, we should then have some way to identify the program on subsequent sessions and to check that the program has not been maliciously or otherwise modified. In his paper about "good viruses", Bontchev (1990) suggests that self-replicating programs (e.g., those that spread through a network performing automatic maintenance) should ask permission before they begin spreading. We feel that it could be useful to require this of every program. Next we describe ProgramID as a simple mechanism for implementing this idea.

The general idea of ProgramID is the following. The first time a program attempts to run, the program is stopped from doing so and the user is prompted to give the program permission to run. At this time the user is expected to use independent means to determine where the program came from and what the program does. If permission is denied, the program is not allowed to run.
If permission is granted, a checksum is taken of the program file and stored in a secure database. Subsequently, when the program attempts to run, the database is checked to be sure that the program has been given permission, and its checksum is checked to ensure that the program has not been modified since the time it was given permission. In this way, every program is required to explicitly request permission to execute, and properties of the program (its checksum) are recorded so that it can be identified on subsequent encounters. Thus, to use ProgramID a typical user would do the following:

1. The user installs ProgramID on their personal computer.

2. The next time any application is used, be it WORD, EXCEL, PHOTOSHOP, or whatever, ProgramID will stop the application and ask the user for permission to launch it.

3. If the user gives permission, this fact is recorded in a secure database along with the program's signature (e.g., a checksum of the program file).

4. Steps 2 and 3 are repeated for every application that the user regularly uses.

5. Once these applications have been given permission, no further interruption will occur during future use of these applications, unless ProgramID detects that the program has been modified (i.e., the program's checksum signature does not match the original).

Unbeknownst to the user, there may be spyware that was unknowingly imported onto the PC by downloading files from the Internet. Its presence will now be detected whenever it tries to operate. In this way, the existence of spyware is also made apparent to the user. Any resident virus that attempts to run will also be detected.
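As an illustration, the permission-and-checksum decision behind steps 2-5 can be sketched in a few lines of Python. This is our own sketch, not the authors' code: the function names and the dictionary standing in for the secure database are assumptions made for clarity.

```python
import hashlib

# Possible decisions when a program attempts to run.
ALLOW, PROMPT_NEW, PROMPT_MODIFIED = "allow", "prompt-new", "prompt-modified"

def md5_of_file(path):
    """Return the MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_program(path, authorized):
    """Decide what to do when the program at `path` attempts to run.

    `authorized` maps program path -> the MD5 recorded when the user
    first granted permission (playing the role of the secure database).
    """
    if path not in authorized:
        return PROMPT_NEW              # never seen before: ask the user
    if authorized[path] != md5_of_file(path):
        return PROMPT_MODIFIED         # file changed since authorization
    return ALLOW                       # known and unmodified: let it run
```

A daemon would invoke such a check on every process launch, suspending or killing the process on the two prompt outcomes until the user decides.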

2.1. Windows Implementation

We have implemented ProgramID for the Windows NT/2000 environment. As shown in Figure 1, the implementation consists of the following components:

* Process Observer: coded as a Windows kernel-mode driver, the Process Observer captures any process as it is loaded and notifies the ProgramID Daemon with the process's identifier.

* Database: implemented in MySQL, the database records the vital information about the applications that the user has authorized.

* ProgramID Daemon: a process running in the background as a Windows Service, with four sub-modules:

  * Processing Center: the Processing Center sub-module gets information about the captured process and kills it immediately if it is not in the ProgramID database.

  * Signature Generator: if the captured process is in the ProgramID database, the Signature Generator sub-module calculates the MD5 checksum of the executable file corresponding to the captured process and compares it against the checksum previously stored for that file in the database. If the MD5 checksum does not match, the captured process is immediately killed.

  * Database Connector: the Database Connector provides the interface between the Processing Center and the ProgramID database.

  * User Interface: the User Interface informs the user about unauthorized and modified programs and prompts the user for an appropriate course of action. Specifically, if the ProgramID database has no record of a program, the program must be new to the computer system, and the User Interface module prompts the user for authorization to execute it. Alternatively, if the checksum does not match the value stored in the ProgramID database, the file must have been changed after its initial authorization, in which case the User Interface again prompts the user for a course of action.

Figure 1. Windows implementation of ProgramID.

A file that passes all authorization tests is invoked as a child process of the ProgramID Daemon. ProgramID is added to the NT/2000 kernel in a similar way that a printer driver is added. So that all ProgramID components are loaded at boot time, we made the entire ProgramID package into a Windows Service. Thus, a user only needs administrator privileges once, to install the service. The ProgramID service will then be available automatically after Windows is restarted, regardless of who is using the computer.

2.2. Module Specification

The functionality of each ProgramID module is specified in the following.

* Process Observer: NT/2000 provides a set of APIs, known as "Process Structure Routines", exported by NTOSKRNL. One of these APIs, PsSetCreateProcessNotifyRoutine(), offers the ability to register system-wide callback functions which are called by the operating system each time a new process starts, exits, or is terminated. Based on a process-monitoring example by Ivanov, we implemented the Process Observer as an NT kernel-mode driver that notifies our ProgramID Daemon when a new process begins execution. It then passes the required process information, including the process ID and its parent, to the Processing Center module.


Figure 2. Sample data in the database.

* Database Connector: The Database Connector module manages the ProgramID database. In this implementation, we chose MySQL for the database. Each record contains a unique ID for the program, the filename with the full path to the program, and the MD5 checksum of the program file. Figure 2 shows some sample data in the database during our experiments.
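For illustration, a connector with this record layout can be sketched with Python's built-in sqlite3 in place of MySQL. The table and column names are our own guesses at the spirit of the design, not the actual schema.

```python
import sqlite3

def open_db(path=":memory:"):
    """Create the ProgramID table: one row per authorized program."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS programs (
                       id       INTEGER PRIMARY KEY,
                       filepath TEXT UNIQUE NOT NULL,
                       md5      TEXT NOT NULL)""")
    return con

def authorize(con, filepath, md5):
    """Record (or update) a program the user has approved."""
    con.execute("INSERT OR REPLACE INTO programs(filepath, md5) VALUES(?, ?)",
                (filepath, md5))
    con.commit()

def lookup(con, filepath):
    """Return the stored MD5 for a program, or None if unauthorized."""
    row = con.execute("SELECT md5 FROM programs WHERE filepath = ?",
                      (filepath,)).fetchone()
    return row[0] if row else None
```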

* Signature Generator: The Signature Generator module uses the MD5 algorithm to generate the signature of the target file. MD5 may be replaced by some other algorithm to trade off security and performance.
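The suggested algorithm swap amounts to parameterizing the hash. A sketch of ours, using Python's hashlib:

```python
import hashlib

def file_signature(path, algorithm="md5"):
    """Hash a file with a selectable algorithm.

    "md5" matches the paper's implementation; passing e.g. "sha256"
    trades some speed for much better collision resistance.
    """
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```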

* User Interface: A User Interface dialog is displayed when an unauthorized application attempts to run (see Figure 3), or when an application that has been modified since it was authorized attempts to run (see Figure 4).

Figure 3. Unauthorized application dialog.


Figure 4. Modi®ed application dialog.

The User Interface allows the user to select an appropriate action: for example, kill the application, let it run once, or add it to the database (updating the entry if it was modified).
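These three actions can be summarized in a small dispatch sketch; the names are hypothetical, and the dictionary again stands in for the ProgramID database.

```python
# The three actions offered by the dialogs in Figures 3 and 4.
KILL, RUN_ONCE, AUTHORIZE = "kill", "run-once", "authorize"

def handle_user_choice(choice, path, digest, authorized):
    """Apply the user's decision; return True if the program may run now.

    `authorized` is the path -> checksum store (the ProgramID database).
    """
    if choice == KILL:
        return False                  # terminate the process
    if choice == RUN_ONCE:
        return True                   # run this time, record nothing
    if choice == AUTHORIZE:
        authorized[path] = digest     # add, or update a modified entry
        return True
    raise ValueError("unknown choice: %s" % choice)
```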

2.3. Experiments

We conducted five experiments: four to test the functionality of ProgramID, and the last to attack ProgramID by attempting to kill the ProgramID process.

1. We installed ProgramID and then attempted to run "notepad.exe". Notepad did not run. Instead, the Unauthorized Application Dialog (Figure 3) was displayed. We selected the second option to authorize it, and Notepad then started execution. We closed Notepad and tried to run it again. This time, Notepad started smoothly.

2. We closed Notepad, changed one byte in the "notepad.exe" file with a binary editor, and ran it again. ProgramID prevented Notepad from running and displayed the Modified Application Dialog (Figure 4).

3. We pressed ctrl + alt + delete, which on Windows invokes the Task Manager (taskmgr.exe). The Unauthorized Application Dialog was displayed, prompting us to authorize the Task Manager.

4. We received an email with an attachment "calc.exe". We double-clicked the attachment in the email client (Outlook) and the Unauthorized Application Dialog was displayed.

5. We made an application that did nothing but attempt to kill the ProgramID Daemon process. When we ran the application, the ProgramID Daemon killed it and displayed the Unauthorized Application Dialog.

2.4. Implementation Issues

With our Windows implementation of ProgramID, all executable programs that run as a Windows process can be monitored and prevented from executing, no matter where they are located. Once a process is allowed to run, however, ProgramID has no control over what the program can do. For example, if a user inadvertently gives a malicious program permission to run, it can do harm. In addition, if a program like an Internet web browser is given permission to run, it can do harm if it opens a web page that contains malicious scripts. Of course, we do not advocate that ProgramID should be the only security service in use on a personal computer, but rather a complement to other services such as virus protection and firewalls, which provide additional ways to detect and prevent known malicious processes and scripts from executing.

As with other security services, ProgramID is itself subject to attack. The security of our ProgramID implementation relies on the digital signature algorithm and on the security of the ProgramID database. These are issues that can largely be solved using existing techniques. Another major concern comes from the method we use for killing processes. Some viruses, including W32.HLLW.Kickin.A@mm, W32.Beagle.F@mm, and W32.HLLW.Fizzer@mm, attempt to identify and terminate any antivirus and firewall processes they find running on the computer they are attacking (http://securityresponse.symantec.com/). Which process can kill its opponent first thus remains a challenge, although our ProgramID implementation always won the battle in our experiments. One proposed workaround is to choose a user-specified or even random name for the ProgramID Daemon application during installation, so that ProgramID can no longer be targeted via its process name.

Finally, before ProgramID could be released to the public, there are a host of usability issues that we would need to address.
For example, there are the issues of disabling the service and installing upgrades to it. In our exploratory implementation of ProgramID, we have given the user the ability to enable and disable its services directly from the ProgramID user interface. Regarding upgrades or reinstallation, our current implementation requires that the user first uninstall the old version before reinstalling it or before installing a newer version.

3. Analysis

ProgramID is an example of a principle we call TGAL. TGAL is a basic principle for distributed, decentralized system management: behavior at the local level should have the global system-level objective in mind. An illustration of the TGAL principle in the context of ProgramID might involve a malicious email virus spreading through the Internet. Email viruses generally claim to come from someone you know. They also usually claim to be something they are not, for example, an image or some sort of important file. For the unfortunate user who happens to open the attachment, the email virus will execute and do its damage. Among the actions an email virus takes, it will email itself to other unsuspecting Internet users, spreading to cause global disruption. For a malicious program that is actively trying to masquerade as something it is not, ProgramID provides an extra level of defense by forcing the program to seek permission before it can execute. The user might then realize that the attachment is suspicious, and assuming they do not give it permission, the user becomes effectively immune to that virus. Models similar to those from the epidemic literature can be used to study how the number of vulnerable computers protected with ProgramID would impact the resulting global size of the epidemic. Next we describe some simple experiments to illustrate ProgramID and the TGAL principle.

3.1. Analysis Setup

If used properly, ProgramID can make a personal computer user immune to most current email viruses. For example, suppose there is an email virus going around that is trapped by ProgramID. Suppose further that our computer users who have ProgramID installed are in the habit of never authorizing any program except those that they use on a daily basis. In this case, even if a user's email client attempts to run a virus attachment, ProgramID will stop it, the user will abort it, and the user's computer will not become infected. The virus will then not be able to spread from that user's computer to any others. Making a certain fraction of the users on a network immune to such an email attack will thus affect the global spread of the virus, which we measure by the expected number of computers that become infected with the email virus.

Using Matlab, we explored the expected attack size as a function of the fraction of randomly selected users protected with ProgramID and of the topology of the network over which the email virus is spreading. We modeled the spread of an email virus as follows. For each copy of the email virus received, users not using ProgramID will open it and become infected with probability 0.25. Upon becoming infected, a node will make a single attempt to spread the virus to every other email address in the node's address book. We assume that nodes with ProgramID are immune to the email virus and cannot become infected by it. Thus, the email virus cannot spread through a computer that has ProgramID installed. For our experiments we started the email attack by infecting a single randomly selected node and let the virus spread until no more nodes became infected. We counted the total number of nodes that the virus was able to infect before it stopped and took the average over 1,000 such experiments.
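The model just described is easy to reproduce. The following Python sketch (ours, not the authors' Matlab code) implements the spread rule together with a Newman-Watts-style small-world construction; function names and parameters are our own.

```python
import random

def newman_watts(n, k=6, p=0.05, rng=random):
    """Ring lattice with k nearest neighbors per node, plus random
    shortcuts added with probability p per lattice edge (a sketch of
    the Newman-Watts small-world construction)."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):       # k/2 neighbors on each side
            a, b = i, (i + j) % n
            nbrs[a].add(b); nbrs[b].add(a)
            if rng.random() < p:             # occasionally add a shortcut
                c = rng.randrange(n)
                if c != a:
                    nbrs[a].add(c); nbrs[c].add(a)
    return nbrs

def attack_size(nbrs, immune_frac, p_open=0.25, rng=random):
    """Spread one email virus and return the number of infected nodes."""
    n = len(nbrs)
    immune = set(rng.sample(range(n), int(immune_frac * n)))
    seed = rng.randrange(n)                  # randomly chosen first victim
    infected, queue = set(), [seed]
    if seed not in immune:
        infected.add(seed)
    while queue:
        u = queue.pop()
        if u not in infected:
            continue
        for v in nbrs[u]:                    # virus mails the address book
            if (v not in infected and v not in immune
                    and rng.random() < p_open):
                infected.add(v)
                queue.append(v)
    return len(infected)
```

Averaging `attack_size` over many trials and immunity fractions reproduces the kind of curves reported in Figure 5.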
The experiments were performed assuming two different network topologies: a fully connected graph topology and a small-world graph topology. These topologies describe relationships between email users. There is a link between two users if they have each other in their respective address books (we assume an undirected graph, that is, if I have you in my address book, then you have me in yours). The fully connected case therefore assumes that every user has every other user's email address in their address book. This is of course unrealistic, but serves as a baseline case. Studies have suggested that a small-world topology is a more realistic topology for the social relationships that email address books define (Watts and Strogatz, 1998). We constructed our small-world graph using the method of Newman and Watts (1999) with six nearest neighbors per node and a disorder parameter of 0.05.

Figure 5. Expected size of email attack as a function of % of users with ProgramID.

For graphs with 500 nodes our results are given in Figure 5. As can be seen from Figure 5, the expected attack size is linear for a fully connected graph (roughly equal to 100% minus the percentage of immune nodes). The small-world topology is naturally somewhat more robust to spreading attacks, but the value of immunization is still quite clear.

4. Discussion and Conclusions

The systems sciences have developed many successful methods for measuring, modeling, and optimizing performance. Much more difficult is measuring, modeling, and optimizing security. For information systems such as the Internet, one reason may be that security is a subjective quantity that is difficult to define mathematically and objectively. This leads us to propose that many aspects of security should be handled at an individual, local level. This allows individuals to tailor their security schemes to their own definition of security. In addition, if individual incentives are designed properly, selfish actions on the part of individuals can have beneficial group effects.


We described ProgramID as a simple security service that users can install to control which processes can execute on their computers. Conceptually, ProgramID is like a login routine for programs, forcing programs to identify themselves before they can execute. While a tool like ProgramID cannot stop all attacks, it provides another layer in a comprehensive defense-in-depth security toolkit. In some sense, ProgramID acts like the complement of a virus protection program. A virus protection program detects programs that are already known to be bad and stops them from executing. ProgramID, in contrast, stops all programs from running unless the user has explicitly given them permission to run. A user is therefore strongly advised not to give permission to a program unless the user knows the program is not malicious. How a user can go about doing that for programs downloaded from the Internet is likely to involve ad hoc methods depending on the source of the program (e.g., email attachments vs. software downloads from commercial websites).

ProgramID is an example of a principle we call TGAL. Epidemic models tell us how immunizing various randomly selected or specially targeted members of a population can reduce the overall expected epidemic size. In this way, local actions working in a decentralized way can lead to beneficial global outcomes. Moreover, when there are many participants, we argue that the strategies used at the local level can sometimes be quite simple (Ho and Pepyne, 2004). ProgramID is one example of such a simple strategy.

Acknowledgments

Funding for this work was provided by the U.S. Army Research Office (contract DAA1901-1-0610) and the U.S. Air Force Office of Scientific Research (contract F49620-01-10288).

References

Bontchev, V. Are "good" computer viruses still a bad idea? http://securityresponse.symantec.com/
Clark, D. 1988. The design philosophy of the DARPA Internet protocols. Proceedings of ACM SIGCOMM 1988, Stanford, CA, August 1988, vol. 18, no. 4. http://www.acm.org/sigcomm/ccr/archive/1995/jan95/ccr-9501clark.html
Ho, Y.-C., and Pepyne, D. L. 2004. A conceptual framework for optimization and distributed intelligence. To appear in Proceedings of the 43rd IEEE Conference on Decision and Control, December 2004.
Ivanov, I. Detecting Windows NT/2K process execution. http://www.codeproject.com/threads/procmon.asp
Newman, M. E. J., and Watts, D. J. 1999. Scaling and percolation in the small-world network model. Physical Review E 60: 7332-7342.
Pfleeger, C. P., and Pfleeger, S. H. 2003. Security in Computing, 3rd edition. Prentice Hall.
Watts, D. J., and Strogatz, S. H. 1998. Collective dynamics of small-world networks. Nature 393: 440-442.
White, G. B., Fisch, E. A., and Pooch, U. W. 1996. Computer System and Network Security. CRC Press.
