Category Archives: Academic Works

Symbol and Static

This is a review of KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs by Cristian Cadar, Daniel Dunbar, and Dawson Engler and Finding Security Vulnerabilities in Java Applications with Static Analysis by V. Benjamin Livshits and Monica S. Lam for my CS 253 Computer Security class.

This is the first time I have read thoroughly on the topic of code analysis. Usually, if I want to find an error in a program, I just use trial and error: enter some data and see what happens. I never bothered running my code through an analyzer, so basically, I catch errors only when the program is already running. That has worked for me so far, but I think I should start catching errors at the code level. Of course, I'm not saying that I code dirtily or carelessly; I always try to catch all possible sources of errors while coding. It's just that an additional level of error catching doesn't hurt.

I generally agree with the two papers. It's even amazing that they found bugs using their methods and had them fixed almost immediately. However, just like with the previous papers, I felt at times like I was reading a product commercial again. I'm beginning to dislike how some papers describe their work as far superior to others without an in-depth comparison and, in some cases, without even describing their work in full detail. What I understood from the KLEE paper is how to use the tool by passing command line arguments, but I'm still confused about its inner workings and how it produces its output. Basically, I know the input and the output but not the process that connects them. I might have missed it in the paper, though.
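From what I have pieced together elsewhere, the process connecting input and output in symbolic execution goes roughly like this: each branch forks the execution, the constraints along each path are collected, and a constraint solver produces one concrete test input per path. Here is a toy Python sketch of that idea under my own simplifying assumptions; a brute-force search stands in for a real solver (KLEE uses STP), and this is nothing like KLEE's actual implementation:

```python
# Toy sketch of the idea behind symbolic execution (not KLEE itself):
# each branch forks execution, and the collected path constraints are
# handed to a "solver" to produce one concrete test input per path.

def explore(branches, constraints=()):
    """Enumerate all paths through a list of branch predicates."""
    if not branches:
        yield constraints
        return
    pred, rest = branches[0], branches[1:]
    # Fork: one path assumes the predicate holds, the other its negation.
    yield from explore(rest, constraints + (pred,))
    yield from explore(rest, constraints + (lambda x, p=pred: not p(x),))

def solve(constraints, domain=range(-50, 50)):
    """Stand-in solver: find any concrete input satisfying all constraints."""
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None  # path is infeasible

# Program under test: two nested branches -> four paths.
branches = [lambda x: x > 10, lambda x: x < 20]
tests = [solve(path) for path in explore(branches)]
print(tests)  # one concrete input per path; None marks the infeasible one
```

Each entry in `tests` is a test case that drives execution down a different path, which is how high coverage falls out of the approach.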

The Static Analysis paper, on the other hand, offers a more complete view of how it works, but I still found traces of self-promotion in it. I don't know if this kind of promotion is just a side effect of writing a paper, if I'm just being picky, or if it is really intended. Just like with the other paper, I haven't fully understood how static analysis works. Part of my confusion, I guess, comes from the terms used, like tainted objects. I'm also not really familiar with this kind of approach to computer security, although the causes of the vulnerabilities mentioned are well understood, so I get static analysis more than symbolic execution.
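To untangle my own confusion about "tainted objects", here is a toy Python sketch of the general idea as I understand it: data from an untrusted source carries a mark, the mark propagates through operations, and a security-sensitive sink refuses marked data unless it went through a sanitizer. Note that the paper does this statically, without running the program; this runtime version only illustrates the concept, and all the names here are my own inventions:

```python
# Toy runtime illustration of taint tracking (the paper does it statically).

class Tainted(str):
    """A string carrying a taint mark; concatenation propagates it."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def sanitize(value):
    """Hypothetical sanitizer: escaping returns an untainted plain str."""
    return str(value).replace("'", "''")

def run_query(sql):
    """Security-sensitive sink: refuse data still carrying the taint mark."""
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached a sink")
    return "executed: " + sql

user_input = Tainted("alice' OR '1'='1")              # from an untrusted source
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(isinstance(query, Tainted))                      # True: taint propagated
print(run_query("SELECT ... '" + sanitize(user_input) + "'"))  # sanitized: accepted
```

The static version reasons about which variables *could* hold tainted values along any path, so it catches the bad flow without needing the malicious input.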

In summary, I'm grateful to the two papers because they introduced me to topics that are relatively new to me, or topics which I haven't really explored yet. I think this is also why I have only a few things to say with regard to the contents of the papers. The two papers also inspire me to find or develop a similar analysis tool that I can apply to the software projects I'm working on. And rest assured that I will not promote it in a paper. Just joking.

Confusing Security

This is a review of Setuid Demystified by Hao Chen, David Wagner, and Drew Dean and Understanding Android Security by William Enck, Machigar Ongtang, and Patrick McDaniel for my CS 253 Computer Security class.

I used to think that security is simpler than most people say, and that I somehow understood it. But after reading the two papers and the way they presented the complicated parts of security, I once again proved how naive I am.

At first glance, setuid should not be that complicated since there is a specification for it. But Unix systems diversified and different implementations arose. As a result, developers have many choices for setuid, and these choices lead to bugs and security vulnerabilities in programs, as the Setuid Demystified paper shows. The paper presents programming errors caused by wrong assumptions about how setuid behaves.

The Setuid Demystified paper is a good read. It's a bit long, but it contains valuable information, and I like the way it is organized. Some introduction and history of user IDs are given, which helps the first-time reader. Then follows a deep discussion of setuid, its many variations, and how it compares across different systems. This is where I got lost. I also realized that one should be very careful when manipulating user IDs: there are cases where code looks safe but a setuid exploit can still be found. Covering all the cases is what makes security complex.

I'm also not a fan of formal models, but the paper showed that they can be useful for covering all cases. And with all cases covered, security is strong. I agree with the findings of the paper: I think controlling all three user IDs is the way to go. Explicitly setting all their values forces the developer not to assume anything, although care must be taken, since too much control can compromise a system. Unfortunately, existing systems are already in place, and replacing them all outright could produce unexpected results, so I think the API proposed by the paper should be considered for use.
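To convince myself that the formal model helps, I tried writing a miniature version of it: the uid state is the triple (real, effective, saved), and each call is a guarded transition. The rules below are my simplification of the Linux semantics described in the paper, and the class is my own invention, not code from the paper:

```python
# Toy model of the uid state machine: (real, effective, saved) plus a
# guarded transition, simplified from Linux setresuid(2) semantics.

class Process:
    def __init__(self, ruid, euid, suid):
        self.ruid, self.euid, self.suid = ruid, euid, suid

    def setresuid(self, r, e, s):
        """-1 leaves an ID unchanged; a process with euid 0 may set all
        three, otherwise each ID may only become one of the current three."""
        new = [x if x != -1 else old
               for x, old in ((r, self.ruid), (e, self.euid), (s, self.suid))]
        if self.euid != 0:
            allowed = {self.ruid, self.euid, self.suid}
            if any(x not in allowed for x in new):
                raise PermissionError("unprivileged uid change refused")
        self.ruid, self.euid, self.suid = new

# A setuid-root program serving user 1000: dropping only the effective
# uid is reversible, because 0 still sits in the saved uid -- the pitfall.
p = Process(ruid=1000, euid=0, suid=0)
p.setresuid(-1, 1000, -1)        # "drop" privileges temporarily
p.setresuid(-1, 0, -1)           # ...and regain them
print((p.ruid, p.euid, p.suid))  # (1000, 0, 0)

# Explicitly setting all three, as the paper recommends, is permanent.
p.setresuid(1000, 1000, 1000)
try:
    p.setresuid(-1, 0, -1)
except PermissionError as err:
    print(err)
```

Seeing the guard written out makes the paper's point for me: only by constraining all three IDs at once can you be sure the privilege drop cannot be undone.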

And now, let's head to Android. Peep peep. I have zero experience with anything Android-related. In fact, the first time I saw an Android phone was last Saturday, and I barely touched it. Haha. I also don't encounter many articles about Android, but admittedly, I don't browse tech sites as much as I used to.

After reading the Understanding Android Security paper, I have discovered the many and complicated security features of Android. I think what happened is that as Android was developed and security vulnerabilities were found, the developers patched things up to close the possible security holes. As the patches multiplied, it became harder to keep track of all the security features of Android. To simplify them, I think a rewrite must be undertaken, with solutions to the security problems found in earlier versions as the top priority.


This is a review of Buffer Overflows: Attacks and Defenses for the Vulnerability of the Decade by Crispin Cowan, Perry Wagle, Calton Pu, Steve Beattie, and Jonathan Walpole for my CS 253 Computer Security class.

I only started really playing with computers in my first year of BSCS in UP. That is late compared to my classmates and batchmates, who had programming experience in high school. Some even had experience in grade school. Nevertheless, I started catching up. I think my progress coincided with my shift to being a full-time Linux user, since I only knew PC games and web browsing back when I was using Windows in my high school days. I didn't even know what programming was.

Admittedly, Linux back then was not as simple as it is today. Today, you don't even have to configure the X server anymore, whereas back then, you had to combine research skills and luck to get your system fully configured. Needless to say, I came across many things while researching solutions to Linux problems. One of those things is the topic of the paper under review: buffer overflow. But since I was as naive then as I still am now, I didn't even give a second look to the discussions surrounding buffer overflows. Now, thanks to CS 253, I think I have a better understanding of what a buffer overflow is, how it is exploited, and how to defend against it.

Buffer overflow, as I have read, is not very complicated. The main idea is to put some malicious code somewhere in memory, or use code already there, and make sure that the return address points to it. This can be done by overflowing a buffer in a program. The hard part is guessing where the code lies in memory.
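To make the idea concrete for myself, here is a toy simulation of a stack-smashing overflow, written in Python even though the real attack lives in C, with made-up addresses: a frame holds a fixed-size buffer followed by the saved return address, and a copy that ignores the buffer's bounds runs past it and overwrites that address.

```python
# Conceptual sketch only: a "stack frame" is a list of slots where a
# fixed-size buffer sits just below the saved return address.

BUF_SIZE = 8
ATTACK_ADDR = 0xdeadbeef             # made-up address of the injected code

# Frame layout: 8 buffer slots, then the saved return address (made up).
frame = [0] * BUF_SIZE + [0x4005d0]

def unsafe_copy(frame, data):
    """Like strcpy: copies with no bounds check on the destination."""
    for i, value in enumerate(data):
        frame[i] = value             # the index may run past BUF_SIZE

# Attacker input: filler for the buffer, then the address of their code.
payload = [0x41] * BUF_SIZE + [ATTACK_ADDR]
unsafe_copy(frame, payload)
print(hex(frame[-1]))  # 0xdeadbeef -- the function will "return" to the attacker
```

The guessing problem I mentioned shows up here as knowing `ATTACK_ADDR`; in practice attackers pad their code with NOP sleds so a rough guess is enough.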

The paper, in accordance with its title, discusses the buffer overflow vulnerability, then proceeds to show the different ways of attacking it and, of course, the different ways of defending against exploits. However, I expected more because of its title: I thought a more in-depth discussion of attacks and defenses would be given. Although the paper is clear and straightforward, it has the feel of a product promotion. Instead of discussing the topics surrounding its title in more depth, it hurriedly summarizes the vulnerabilities, attacks, and defenses, then goes on to promote StackGuard and how it can solve all our needs.

Okay, that's enough sarcasm. Hehe. StackGuard is great and the paper shows it, but I don't like the way it was presented. Furthermore, some parts of the paper are speculative, as it even points out itself. Overall, the paper is a good read. It is not full of technical jargon that causes head spins, and yet it remains informative. Just look past the product promotion.


This is a review of a CRS Report for Congress entitled Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress by Clay Wilson for my CS 253 Computer Security class.

I have read somewhere before, I forgot where, that true hackers don’t like to use the term cyber to refer to anything related to the Internet. After reading the report, I think I know why.

The report is very informative. It exposes the reader to the different threats the US perceives with respect to computer security and the possible effects once existing security has been breached. It even includes real-world examples of what the report calls cybercrime and cyberterrorism. Some information in the report may seem exaggerated and casts the US as the victim of bad guys, but I take it at face value. After all, I still don't know what is really out there.

I don't have much to say about the report since it is clear and paints the picture very well, but I disagree with some parts of it. The first thing that got my attention is the way the term open source was misused. The statement on page 6 demonstrates this: "Some studies show that authors of software for botnets are increasingly using modern, open-source techniques for software development, including the collaboration of multiple authors for the initial design, new releases to fix bugs in the malicious code, and development of software modules that make portions of the code reusable for newer versions of malicious software designed for different purposes." Although the "collaboration of multiple authors", "new releases to fix bugs", and "development of software modules" are part of the Open Source model, one can argue that they are also part of any software development model. Looking at the cited source, I realize that, of course, it is in the best interest of an anti-virus company to make Open Source look bad, since people are touting it as the solution to viruses and, by extension, the end of anti-virus companies.

One other thing I disagree with is the introduction of so many terms with cyber attached to them. I find it confusing since the definitions overlap. I think it is simpler to use the words without cyber, as in "Why is he imprisoned? Oh, he committed a crime using computers." Creating a new term for an old action distorts people's judgment and hides the fact that the two are the same thing. A crime is still a crime regardless of the way it was done. This may be petty, but I think people do not care as much about their computer security as about their physical security because acts done using computers are perceived differently from acts done without them. Kids know it is not good to go inside another person's house without permission, but they don't know that it is also not good to go inside another person's computer without permission.

One more thing I disagree with is the use of the term hacker to refer to bad people breaking into other people's computers. This is a very old issue on the Internet by now, but I stand by my belief that hackers are good people.

I may not have the statistics, but I believe most computer users use software in its default state. In other words, they use whatever software comes bundled with their computer and don't bother changing the configuration for better security. I think this is one of the reasons why computers get compromised easily, which is why I applaud the report for bringing the education of computer users before Congress. In my weirdest state, I want people to acquire computer usage licenses before being allowed to use computers. That should bring down the number of infected computers.


This is a review of Reflections on Trusting Trust by Ken Thompson for my CS 253 Computer Security class.

As mentioned in class, one aspect of computer security is a program's source code itself. In the early days of computing, source code was freely distributed. People across universities and even commercial companies shared source code and techniques. There were no rules and restrictions. But as computing became more commercialized, the cooperation between people was lost. Companies hired people, restricted them, and locked up their source code, all for their own interests. This freedom of cooperation is what the GNU Project intends to bring back.

It's awesome to think that issues of today were already discussed decades ago. A current issue related to the paper is the ongoing debate over whether software with open source code is more secure than software that does not show its source. This is mostly triggered by the rise of Free and Open Source Software and the effort of companies to stem the tide. The argument for open source code is that more people can see it, which leads to early detection of bugs and, consequently, early fixes of possible entry points for exploits. On the other hand, the argument for closed source code is that when more people can see the code, more people can also find exploits, which leads to insecure programs.

As the paper shows, both arguments miss the point: the openness of the source code does not matter if the accompanying binary is not actually produced from the debated source code. The paper demonstrates in three stages how one can infect a program without writing the bug into the source code. In Stage I, a self-reproducing program is presented; this involves a program printing its own text. In Stage II, a "self-learning" program is presented. It is not really self-learning; rather, it learns via a cycle of adding a new feature, compiling, and installing the new binary. This is the key to producing a bugged program with clean source code. Finally, Stage III combines Stages I and II: the Stage I program is modified and the procedure of Stage II is applied to produce a binary with bugs. The resulting binary can then reinsert the bugs even if the source code is clean. The paper is short but a little confusing for me, though I think I got the point.
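To see how Stage I works, I tried writing a self-reproducing program in Python (Thompson used C, of course). The string `s` contains the program's own text with a placeholder, and `%r` re-inserts `s` into itself, so the output is exactly the two lines of source:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the output as a program produces the same output again, which is the self-reference that the whole compiler attack in Stage III is built on.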

The cooperation of people from long ago is already back. An example is the community surrounding the different Linux distributions. And as we know, Linux distributions are collections of different programs which, in turn, have their source code open to the public. One solution to the problem in the paper that I can see is the use of digital signatures. Installation of programs in Linux distributions is usually done via built-in commands that pull packages from different repositories into the system. If the signature of a downloaded package does not match the signature of the package in the repository, then something is wrong. But this raises a new issue: whether the packages in the repositories and their corresponding signatures are really correct. Another solution is hashing the packages. This is the same as digital signatures, except that instead of signing packages, their hashes are recorded, and the user can then verify that the hashes match.
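The hash check I have in mind would look something like this sketch, where the package bytes and the file name are made up for illustration:

```python
# Sketch of package verification by hash: the repository records the
# SHA-256 digest of each package, and the client recomputes it after
# downloading to detect tampering in transit.
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# What the repository published for a (hypothetical) package.
package = b"pretend these are the bytes of coreutils-9.4.tar.gz"
recorded = sha256_digest(package)

# An honest download verifies; a tampered one does not.
print(sha256_digest(package) == recorded)                # True
print(sha256_digest(package + b"backdoor") == recorded)  # False
```

Of course, this only pushes the trust one level up, exactly as the paper warns: the recorded digest is only as trustworthy as whoever published it.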

Even with security measures in place, everything still boils down to the moral of the paper: "You can't trust code that you did not totally create yourself." The security measures are only good if users trust the one who put them in place. That's why Hogwarts is so secure: Harry, the students, and the staff all trust Dumbledore.