11 December 2013

Scientific Computing: Gateway to Human Psychology

A man experiences a virtual environment at the University of Illinois at Chicago.



Computers are powerful tools.  They are known for their mathematical proficiency and have become a staple of engineering.  However, while computers can use advanced mathematics to model the laws of physics, chemical compositions, and man-made structures, they can also help us learn more about something that is partially intangible—the human mind.  There are two sides to using computers to study the human mind.  There is the neuroscientific side, where the physical activities of the human brain are recorded and analysed, and there is the psychological side, where computers are used in experiments to study the behaviour of human subjects.

While neural computer science is outside of my own field of concentration, it still plays an important part in research.  Computers have reached a point where they can create a map of the activity in a person’s brain.  This is usually achieved by fitting a subject with a cap of electrode sensors, which records the brain’s activity to a computer while the subject undergoes an experiment.  From this information, the computer can generate a model of the subject’s brain.  This helps neuroscientists find out how different parts of the brain react to various stimuli.

Psychological computer science, on the other hand, is a part of computer science in which I am interested.  Psychologists can use computer science to study the minds of children by letting the children use interactive software, such as educational video games.  Psychologists can then observe what choices the children make.  In one study that I looked into a couple of years ago (and whose source I cannot link, because the academic journal is not publicly viewable), a group of researchers visited an elementary school class and let the students play an educational video game.  The researchers noticed certain behaviours: female students tended to ask classmates for help in solving the game’s puzzles, whereas male students tried to solve the puzzles on their own or used the in-game hint system.

Psychological computer science can also be used to study adults via virtual reality.  Dr Jeremy Bailenson, a Stanford University professor who studies virtual computer interaction, went into great detail on virtual reality during his 2011 lecture [1].  His virtual reality experiments range from having subjects walk across a virtual wooden plank, to changing the subject’s race or gender in the virtual world.

From brain scans to virtual identity experiments, computers have a place in the cognitive sciences, as they do in all sciences.

8 December 2013

Computer Graphics: Why Style Matters

a simple illustration of me looking at a C# program.


A little over a week ago, I wrote a thank-you letter to a programmer whom I job shadowed, as part of an assignment for one of my university classes.  I wanted to make my letter a bit more personalized, and I wanted to try out some of the Adobe software that I had received earlier in the year.  During my job shadow, the programmers invited me to participate in a C# programming exercise, so I decided to draw in Adobe Illustrator a simple illustration of me in the C# session and include it in my letter.  The illustration itself is simplistic, due in no small part to my elementary drawing skills, but the illustration’s style does say something.  The illustration, for example, lacks colour, so that it does not distract from the rest of the letter.  In addition, while the image is not incredibly detailed, its cleanness is appropriate for a letter illustration and shows that I put a moderate amount of effort into showing appreciation.

A lot of visuals are made with computers nowadays, and justifiably so; from making illustrations or diagrams with simple shapes, to editing photographs, to animation, computers have shown themselves to be versatile.  For many media, using computer graphics is optional; a film, for example, can just as easily be done in live action or with hand-drawn animation, although in the case of live action, advanced visual effects are more likely to be computer generated to save money.  Video games, by virtue of being played on a computer (including game consoles) in the first place, almost always have computer-generated graphics.  This is where the versatility of computer graphics comes into play; computers have multiple ways of rendering visuals, ranging from realistic rendering to more cartoonish cel-shading.

With the number of tools available, I sometimes see video games not make the most of them.  One of the more commonplace phenomena is known as “Real is Brown”.  A lot of modern action games, in an attempt to look realistic, use muted grey, brown, and beige colours.  One video from Extra Credits points out, however, that the drab palette conflicts with the action-oriented nature of these games and can potentially ruin a game’s intended tone, citing the forgettable 2008 video game Golden Axe: Beast Rider as an example [1].  Not every game needs to be as saturated as a cartoon, of course, and no amount of anti-aliasing, texture filtering, or shading will make up for poor aesthetics in a computer-generated visual.  Besides, the sheer number of computer graphics effects is too much for one blog post, and in the end, computer graphics artists need to use their intuition to make things “look just right”.

1 December 2013

Communications: The Mystery of the Strict NAT

You may not be able to join certain Game session or communicate with other Players while playing. Average matchmaking wait time will be adversely affected.


After spending a pleasant Thanksgiving dinner with my extended family, my parents drove us back home. Exhausted from the outing and from the school projects that I had been working on earlier in the week, I wanted to settle in, lie back, and play video games so that I would be refreshed for next week. I set up my gaming system, loaded up the newest Assassin’s Creed, started my first multiplayer game session … and I was immediately disconnected. After being booted back to the game menu, I saw a worrying red circle that said, “NAT”.

According to the game, my NAT rating was “strict”. Because of this, I could not be matched with other players correctly. What did “NAT” mean, and why did this have to get in the way of enjoying my game? Well, I spent the entire rest of the night looking up networking guides and troubleshooting in agony to find out.

To make a long story short, I found out that the problem had to do with a new Internet modem that my father and I had recently installed. Our new modem had a built-in router that could wirelessly connect to computers, game systems, and other electronics. NAT stands for Network Address Translation, the process by which a router shares a single public Internet address among every device on the home network. The built-in router also had a firewall that automatically blocked certain ports. From what I understand, ports are numbered endpoints in my home network that are used to send information to and from the Internet, and the ports that my video game uses to connect with other players were among the ones that the new firewall kept closed.

I eventually found out how to configure the new router and lower the security of the firewall so that the necessary ports were open and could pass information through, a practice commonly called port forwarding. After doing so, the NAT circle in my game went green, and I was able to play without further interruptions.
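For readers who want to check whether a given port is even reachable before digging through router settings, here is a minimal sketch using only Python's standard library. The port number 3074 is purely an illustrative placeholder for whatever ports your game's documentation lists, and note that a simple TCP probe like this only tells you whether a connection succeeds, not what your NAT type is.

```python
import socket

def is_tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return sock.connect_ex((host, port)) == 0

# 3074 here is just an example port; substitute the ones your game needs.
print(is_tcp_port_open("127.0.0.1", 3074))
```

Running this against your own machine or router for each port the game lists can narrow down which ones the firewall is still blocking.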

Open NAT is the optimal setting. You will join Game sessions and communicate with other Players without experiencing any problems.

After thinking things over, it makes sense that the firewall behaved the way that it did. Most people only use their computers to browse the Web and do everyday tasks. For them, it would not make sense to keep open a port that only video games really use, since doing so can leave them vulnerable to hackers. In the end, learning about ports and how software, such as games, interacts with them has made me appreciate the network security that some of us take for granted.

24 November 2013

Artificial Intelligence: The Future of Chatterbots

An automated online assistant uses AI to help users.

Imagine that you are calling the phone line of a major business.  More often than not, you will encounter an interactive voice response system.  Your experience with it may be a familiar one—a recording guides you through a menu and slowly reads out each option.  You press the button corresponding to the desired option, and the recording reads out the next menu, ad infinitum.  Many people find this process slow and tedious.  A lot of this perceived tedium comes from the fact that such menus are counter-intuitive to natural human communication; it is in our nature to communicate specific ideas in short, easily understandable phrases.

In recent years, however, computer scientists have been looking into systems that can recognize and respond to natural human speech, albeit to a limited extent.  Such a system would be a product of artificial intelligence, a branch of computer science that studies how machines can process and respond to input.  Some current computer programs that recognize human language include chatterbots, software that simulates a conversation with a user, primarily for entertainment purposes.  One of the most popular chatterbots is Cleverbot, which constantly develops speech mannerisms from interacting with humans [1].  Some broadcasters on websites such as YouTube record themselves interacting with Cleverbot and other similar bots, often to humorous effect [2].
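At the opposite end of the sophistication scale from Cleverbot, the core idea of a conversation simulator can be sketched with simple keyword rules, in the spirit of the classic ELIZA program. This is only a toy illustration in Python; the keywords and replies below are invented for the example, and real chatterbots use far richer techniques than this.

```python
import random

# Each keyword maps to one or more canned replies; these rules are
# made up purely for illustration.
RULES = {
    "hello": ["Hello! How are you today?"],
    "weather": ["I hear the weather is lovely.", "Is it raining where you are?"],
    "bye": ["Goodbye!"],
}
DEFAULT = "Tell me more."

def reply(message):
    """Return a canned response for the first keyword found in the message."""
    words = message.lower().split()
    for keyword, responses in RULES.items():
        if keyword in words:
            return random.choice(responses)
    return DEFAULT

print(reply("hello there"))          # prints "Hello! How are you today?"
print(reply("what do you think?"))   # no keyword matches: prints "Tell me more."
```

Even a toy like this shows why chatterbots feel more natural than phone menus: the user volunteers a phrase instead of walking a fixed tree of options.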


Professional use of chatterbots is still limited, however.  Some chatterbot technologies such as SitePal [3] and AlterEgos [4] have been made for use by businesses, but they are rarely seen on other websites.  Perhaps good website layout can make such chatterbots unnecessary.  Still, possibly in the future, when services become used by more people and technical support staff become overworked, artificial intelligence software will be able to supplement human assistance.

13 November 2013

Computer Science: It is more than programming, but the rest tends to be forgotten.

“In most people’s vocabularies, design means veneer.  It’s interior decorating.  It’s the fabric of the curtains or the sofa.  But to me, nothing could be further from the meaning of design.  Design is the fundamental soul of a human-made creation that ends up expressing itself in successive outer layers of the product or service.” –Steve Jobs




Every once in a while, when my mother goes out, she talks to other people about me. Often, she likes to boast about how I am a computer science university student. If I am with my mother, the other person will almost invariably respond by saying, “Oh, so you do programming!” to me, implying that I want to be a programmer. While I do, in fact, do programming for my classwork, programming was not the main reason why I majored in computer science. I should not blame the person entirely, though, because his or her assumption is reasonable.

A lot of people come to Silicon Valley and the surrounding area to get jobs in computer programming, because the field often guarantees a high salary. Many employers offering lucrative jobs, including programming positions, expect their applicants to have a bachelor’s degree [1]. Many universities, however, do not have a programme for computer programming alone, instead directing students interested in programming to a degree in the broader field of computer science. This means that prospective computer programmers are grouped with people interested in other aspects of computers, both in the classroom and in the public consciousness.

One discussion on Stack Overflow talks about the broadness of computer science[2]. Even though programming is perhaps the most sought-after computer position, each part of computer science is crucial, and programmers would not be as successful as they are without the help of computer scientists.

The foundation of computer science, for example, is theory. Computer theory is mathematical in nature. After all, when it comes down to the actions of the processor, each instruction is a mathematical one. Many computer theorists want to make computer programs as efficient as possible by minimizing the number of operations that the computer does.
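As a concrete illustration of minimizing operations, consider searching a sorted list. The sketch below is my own example rather than anything from a particular course: it counts the comparisons performed by a straightforward linear scan versus a binary search, and the gap between roughly a million steps and roughly twenty is exactly the kind of improvement theorists quantify.

```python
def linear_search_steps(items, target):
    """Count the comparisons a linear scan makes before finding target."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return steps
    return steps

def binary_search_steps(items, target):
    """Count the comparisons binary search makes on a sorted list."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if items[mid] == target:
            return steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # about a million comparisons
print(binary_search_steps(data, 999_999))  # about twenty comparisons
```

Both functions answer the same question; the theory lies in proving that one of them needs logarithmically fewer operations.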

There are also software engineers. Software engineers are often the ones who plan out a computer program, telling the programmers what to do. Software engineers want to make a product that is useful for the end consumer, and software engineering reflects the business side of computer science.

We also have the field of human-computer interaction, or HCI. HCI is the psychological part of computer science. HCI researchers study how humans use technology. HCI can be applied to make more user-friendly interfaces or, in the future, to make more advanced virtual reality simulations. Some technological pioneers like Steve Jobs embraced HCI, which he simply called “taste”, and used it to create compelling products that resonated with consumers [3].

There are also other fields like artificial intelligence, but the idea remains that computer science as a whole is what moves us forward, not just programming. Programming without theory would give us messy code of limited use. Programming without software engineering would give programmers no direction in business. Programming without the humanities or HCI would give us bland, uninspired products. These things are what make computing a science.

10 November 2013

File Sharing: Don’t use it for new releases. Thanks.

"Yeah … I wasn’t going to buy this game anyway, so I can keep my stolen version, right?"


Imagine that you are reading an article on the Internet, and the article refers to a work of fiction that you have never heard of. You may be curious enough to look up more information on it, and upon reading the synopsis of the work, it sounds interesting. It could be a video game, film, or book. There is, however, one problem: the work has been out for years, perhaps decades, and buying a new copy is simply not going to be possible.

You could buy it used. If it is rare, then copies might be sold at a high price. If it is a video game, it might only be available for consoles that you never owned. If it is a film, then you might not have the device that the film needs for playback.

The choice in this scenario is yours to make, but in cases like this, some people turn to file sharing, where they find the work somewhere on the Internet and download from there, for free. From a legal standpoint, sharing any copyrighted content is still inadvisable. However, copyright holders have varying tolerance on sharing older works.

For video games, there are emulators that allow console video games to be played on a computer. While emulators can be legally used to play purchased games that have been transferred (“dumped”) to the computer, some people use emulators to play games that they downloaded from file sharing sites. As some emulators have gained a strong following with little opposition from video game publishers, it remains to be seen whether this practice will continue unchallenged.


Finally, there are some people who also download newer works from file sharing sites, and it is these kinds of people whom I can never support. I remember some games in particular, such as The Witcher 2 [1] and Spore [2], being popular targets for downloading when they were new. I even remember reading a post on the Spore forums, where a person mentioned that he downloaded Spore illegally and asked if he could keep his game if he was never going to buy a copy anyway. Other posters (surprisingly) politely told him that what he had done was already illegal and that he could not keep his game. I never replied to the post myself, but I could not help but think that he was foolish to publicly admit to an illegal act.

3 November 2013

Data Structures

A mock bank form.




On a basic level, computer programs are made up of numbers and letters, with letters, or characters, simply being alternative representations of numbers. Because writing a usefully large program purely out of raw values and mathematical operations would be tedious, object-oriented programming was created. In object-oriented programming, numbers, letters, words, and mathematical operations can be grouped together to create an object. Objects can even contain other objects. There are also classes, which define what types of objects can exist. Since objects hold multiple pieces of information together, they can sometimes be called “data structures”.

For one of my early university assignments, I acted in a hypothetical scenario where I had to program a bank's computer to store a list of customers and use that list to authorize ATM transactions. I created a class that defined a customer. A customer was an object that had a first name, a last name (both strings of letters), a bank ID number (an integer), and a monetary balance (also an integer, representing the amount in cents).

Since every customer has this standardized set of fields, the bank's computer can use my Customer class to look up and authorize any customer quickly and reliably, even in a huge database. And since each customer's information is structured consistently, a data structure is comparable to a form that a customer fills in by hand, which likewise provides the bankers a consistent set of information.
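A minimal sketch of the Customer class described above might look like the following. This is my reconstruction in Python for illustration; the original assignment's field names, methods, and even its programming language may well have differed.

```python
class Customer:
    """One bank customer: two name strings, an integer ID, and a balance in cents."""
    def __init__(self, first_name, last_name, bank_id, balance_cents):
        self.first_name = first_name        # string
        self.last_name = last_name          # string
        self.bank_id = bank_id              # integer ID number
        self.balance_cents = balance_cents  # monetary balance in cents

    def authorize_withdrawal(self, amount_cents):
        """Authorize only if the balance covers the requested amount."""
        return 0 < amount_cents <= self.balance_cents

# Keying a dictionary by bank ID gives the fast lookup the post mentions.
customers = {
    1001: Customer("Ada", "Lovelace", 1001, 25_000),
    1002: Customer("Alan", "Turing", 1002, 1_500),
}
print(customers[1002].authorize_withdrawal(2_000))  # prints False: insufficient funds
```

Because every Customer object carries the same fields, the ATM code can treat each record identically, just as a banker can treat every hand-filled form identically.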

25 October 2013

Hacking: For Importation and for Fraud

Joystiq article on the PlayStation Network Outage




Last month, I bought a Sony PlayStation 3. With the PlayStation 3 having been released in 2006, buying a seven-year-old video game system might make it seem that I am late to the party. The popular PlayStation 3 is the successor to the even more popular PlayStation and PlayStation 2 consoles, released in 1995 and 2000 respectively. The popularity of the PlayStation series, however, has also made it a target for exploitation. Exploiting a video game console such as the PlayStation can range from the relatively benign, such as modifying the console to play video games from another country, to the thievish, such as modifying the same console to play a disc with pirated content.

Around 2005 and 2006, video game consoles rapidly shifted towards online gaming. Coinciding with the release of the PlayStation 3, Sony allowed users of the new system to play with other people, download new video games, and access other features via the PlayStation Network. It was only a matter of time, however, before exploiters moved on and tried their efforts on the PlayStation Network. This time, however, their actions would lead to a breach in the privacy of millions of users and gain the attention of governments. On 17 April 2011, an unidentified group of hackers breached the PlayStation Network, causing the service to shut down three days later on the 20th. Although the Network was shut down for safety, the hackers had already obtained a large number of passwords for user accounts, allowing them to access the personal information stored in those accounts. The passwords were cited as being hashed rather than encrypted [1]; hashed passwords, especially unsalted ones, can still be cracked offline against precomputed tables, which makes them only moderately secure.
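To illustrate the difference a salt makes, compare a bare hash with a salted, iterated one. This is a generic Python illustration using the standard library, not a description of Sony's actual scheme, about which the cited report gives few details.

```python
import hashlib
import os

password = b"hunter2"

# A bare, unsalted hash: the same password always yields the same digest,
# so an attacker can match stolen digests against precomputed tables.
bare = hashlib.sha256(password).hexdigest()

# A salted, iterated hash (PBKDF2): a random salt makes every stored
# digest unique, and the iteration count slows down brute-force guessing.
salt = os.urandom(16)
salted = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

print(bare == hashlib.sha256(password).hexdigest())  # True: deterministic
print(salted == hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000))  # False: new salt, new digest
```

The second property is what defeats precomputed tables: even if two users pick the same password, their stored digests differ.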

Such an event can be concerning for anyone who relies on their bank accounts to keep their money secure. While I was not part of the PlayStation Network during the incident, I can still relate, since one of my accounts for an unrelated video game was hacked. Fortunately for me, the account had long since been inactive and no longer had my father's credit card information on it, so no real-life problems ever came from that incident.

13 October 2013

Open Source: Will I Become a Late Adopter?

Logos of SourceForge, Android, Linux, MediaWiki, and GIMP


I am a mainstream-ist.  I run Windows on my computer.  On it, I have a lot of proprietary software.  Other than video games, I have products made by big-name companies, such as Microsoft’s Visual Studio for C# and C++ programming and Office for writing documents and creating spreadsheets and slideshows.  This is in contrast to some other dedicated computer users, who use lesser-known operating systems such as Linux and tend to have a large library of free-licensed software.  Graphic artists may, for example, use the free GIMP software to edit pictures instead of the expensive, professional-grade Adobe Photoshop.

Since I mainly use Windows, I am sometimes at a slight disadvantage when I need to program at a low level such as C or Assembly, as most compilers for low-level programming tend to, from my experience, be optimized for Unix-like environments.

Open source as a computing concept is not limited to downloadable software, however.  MediaWiki, known for powering major websites such as Wikipedia, is an example of open-source web software, which anyone can use from just about any computer without installing anything locally.

Open source platforms have varying levels of success.  Linux, while liked by many hardcore computer enthusiasts, is still niche in the overall market for PC operating systems, whereas the operating system Android, used mainly for mobile phones, rivals iOS, Apple’s operating system for its mobile devices.  Even then, while Android has success in the mobile phone market, it fared poorly when it was used to make the Ouya[1], an open-source video game system.  There are many possible reasons for why open source platforms seldom become mainstream, such as a lack of official advertising or a lack of brand recognition.

Supporters of open source often cite how such projects can be continually contributed to and improved by both old and new programmers, potentially lengthening the lifecycle of the software.  The fact that the Internet allows projects of any size to be shared easily helps matters, with websites such as SourceForge being a popular repository for open-source projects.


Finally, two years ago, one of my roommates, who was a senior software engineering major, said to me that he did not believe in software that one had to pay for.  Believing in a capitalistic society where everyone has the right to demand fair, but not necessarily equal, compensation for their labour and talent, I was not sure if I could wholeheartedly agree with my roommate.  After all, computer programming is a highly sought-after skill, so why would some programmers choose not to monetize it?  For a large-scale software project to reach its full potential, it needs to be backed by a large amount of research, and research costs time and human resources.  Because of this, it remains to be seen how sustainable open-source development can be for those types of projects.

6 October 2013

Agile: My First Experience Developing in a Group

Screenshot of Doodle


Agile is a form of software development in which advances are made in small, manageable increments.  When development is done in small iterations, it can be easier to make changes to the software and steer it in a certain direction as the developers, company, or clients see fit.  This is in contrast to traditional methods of software development, where the design of the software is planned in advance and deadlines are pre-determined, leaving little room for deviation.  Although traditional development allows a product to be delivered reliably on time, the product may also lack features that were not thought of until later in the development cycle.  While the quality of a traditionally-developed software product will likely still be adequate, its features may not be as fleshed out as if it were developed with Agile.

This leads us to Scrum, a form of Agile development that my classmates and I are using for a university class.  Up until this point, all of my software projects had been done alone, recreationally—I had almost never developed software in a group, and I had never developed software to serve a specific purpose.  Because of this, my Agile software project has allowed me to see for the first time how professional software development works, and more specifically, how quickly plans can change during the development process.

In my class’s Scrum, we meet with a client who tells us what he wants for a website, and sometimes during a meeting, he will tell us new information that we developers were previously unaware of, which may require our groups to redistribute tasks.  In addition, my Scrum group uses various websites, such as Doodle, to decide on layouts, designs, and features for the website that we are developing.  From what I have seen, software development processes can take on various forms, and it is this trait that allows creative solutions to be achieved via software.

18 September 2013

LinkedIn and Branding: What I Had Been Missing

Microsoft page on LinkedIn


My experiences with LinkedIn are few.  For starters, I knew nothing about LinkedIn until one of my instructors introduced me to the site during a summer university programme this year.  Just recently, I found out that LinkedIn was launched all the way back in 2003.  I was surprised not to have learnt about the website until 10 years after its establishment.  Perhaps it is a sign that I am still very much new to the job market.

For me, LinkedIn may be the start of something new.  Before I was introduced to LinkedIn this summer, my plan for my adult life was less certain; I knew that I wanted to go to graduate school to get some solid research experience before scouting out for a job in the private sector that could complement my interests, but I was not quite sure how I would go about finding a lucrative job.  In addition, I would have had to be even more careful about choosing a job than my peers, since studying full-time at a university for an advanced degree would give me even fewer years to get job experience, and I did not want to squander my future education.

Most importantly, however, I knew that I could not keep my hand held by my parents as an adult, and that I needed a way to find employment on my own.  All around me at my university, I see ambitious young men and women talking about how they already have internships or are almost ready to start their own company.  While I have my own aspirations, I am still not completely sure how to fulfil them, and it is that difference which makes me feel intimidated.

This is where LinkedIn can help me.  As a website, LinkedIn feels familiar to computer geeks like me, and by following various companies, I can keep track of what they post.  By reading these companies’ posts, I can get a better idea of their work principles and know what to expect if I pursue employment there.  If a company is active on LinkedIn and posts on the website regularly, then I feel more comfortable and open towards the company.  Posting on LinkedIn may seem minor compared to the company’s other responsibilities, but if my experience is anything to go by, doing so can increase a company’s appeal to prospective employees.

13 September 2013

QR Codes: Often a Gimmick in Advertisements

Scan here to unlock exclusive behind-the scenes video of tonight's episode.


Shortly after my parents and I moved to the suburbs in the East San Francisco Bay Area in 1997, the popularity of the World Wide Web was picking up speed.  It was not long before my father signed up for the Internet service provider America Online.  My father and I were intrigued by the amount of information that we could conveniently access.  Neither of us cared that AOL used the telephone lines, blocking us from sending or receiving phone calls; the amount of new things that we could do made it worthwhile.  (My mother, who did not use the computer at all, let alone the Web, did find the blocking of phone calls troublesome, though.)

What followed was the popularity of the term “.com” in advertisements.  In 1997, having a “.com” seemed like a big deal, as websites with their own domain names were associated almost exclusively with major businesses that had the resources and the infrastructure to risk building on a new and rapidly-growing, but uncertain, platform.

Now in 2013, just about anything can have a domain name, from upcoming films which are soon to be forgotten, to small family businesses.  This brings us to QR codes—they are functionally similar to URLs, or Web addresses, but take up less room on, for example, a poster.  In a sense, QR codes combine the compactness and machine readability of UPCs (bar codes) with the flexibility of URLs.

The advantage of machine readability is particularly important, since it leads to my overall opinion of QR codes.  QR codes that are meant to be scanned by consumers are often a gimmick; the people who can scan them are people with smartphones, who can simply type a URL and go to the relevant website directly (although a long URL can justifiably be replaced with a QR code).  QR codes can be overused to a fault [1] [2].  As a person who is mindful of where I go online, I often do not want to blindly open a webpage or have my phone do anything that I would not expect it to do, so I have no interest in scanning them.  Since advertisers were promoting their Web content long before QR codes even entered popular use, QR codes may quickly lose their novelty, as they do not offer nearly as much as domain names did in 1997.

The most productive uses of QR codes are instead in East Asia, where QR codes are being used for transit tickets, identification documents, and visas [3].  If the western world does not adopt the practical uses of QR codes that East Asia has, then it may only be a matter of time before the general public writes QR codes off as a fad.

6 September 2013

Social Networking Security, or the Lack Thereof, When Promoting Your Brand

The Facebook page petitioning for a changed Mass Effect 3 ending.


Branding is an important part of business.  In short, it maintains a company’s reputation.  Many of us who regularly eat fast food will drink Coca-Cola, for example, because since its introduction in 1886, it has been a consistently good, well-known, and refreshing beverage, 1985’s New Coke notwithstanding.  Coke itself has become a genericized trademark.

Online social media platforms, including but not limited to Facebook, are the newest, the most transparent, and perhaps the most convenient way of discussing a company or its brand.  While products such as Coca-Cola are marketed towards a general consumer base, the most scrutiny from the online social network community may instead be on hobbies that computer geeks and other Internet dwellers like the most—video games.

When discontent, the video gaming community has been one of the most vocal critics of just about anything.  One controversy in the past few months was the introduction of the Xbox One, a video game system serving as the successor to the Xbox 360.  The Xbox One was to include restrictive features not present in the Xbox 360, such as requiring the console to connect to the Internet every 24 hours for verification and blocking pre-owned games unless an additional fee was paid.  A great amount of backlash from the community ensued [1], pressuring Microsoft, the developer of the Xbox One, to remove the restrictive policies before the console was even released [2].

Another controversy concerned a video game itself.  Mass Effect 3 is a science fiction action video game developed by BioWare and released in March 2012.  The subject of the controversy was trivial: the ending of Mass Effect 3 was thought to be poorly written.  Despite the fact that anticlimactic endings are common in some works of fiction, the controversy was once again widespread in the community [3], with a Facebook page even dedicated to petitioning for a new ending to the game [4].  The response eventually pressured BioWare to release a recut ending for free the following summer, a rare occurrence for video games [5].

This vocal criticism, however, may very well be a blessing in disguise.  If Microsoft had not rescinded its restrictions on the Xbox One, for instance, then many people who bought the console at launch would have been surprised by its user-unfriendliness and complained after it was too late for Microsoft to change the Xbox One at all.  Furthermore, the same community that criticizes its industry can also come to its defence.  News reports can at times be inaccurate, with those on video games being no exception.  Preceding Mass Effect 3, the original Mass Effect was criticized by feminist author Cooper Lawrence on Fox News for its sexual content, exaggerating the actual amount of explicitness in Mass Effect.  In response, a large number of negative reviews flooded the Amazon page for Lawrence’s then-latest book, The Cult of Perfection [6].

While the general audience for various products is not quite as fanatical as video game audiences, the lesson from these incidents is still relevant to other companies that open themselves up on social networks.

28 August 2013

Welcome



Welcome, SJSU class of Fall 2013!

I, like many aspiring computer scientists, developed an interest in the subject from video games.  Video games are the ultimate form of escapism, having the visual and audio splendour of film, the enduring length of novels, and the engaging interactivity of other types of games.  Having been immersed in the experiences created by video games since 1996, I aspire to work as a computer scientist to create new experiences using computers, in a field that is sometimes called "human-computer interaction".

To this day, virtual reality is in limited use.  Even with our modern technology, we still have not yet found a reliable way to give humans a proper, intuitive way to interact in a virtual environment, aside from simple goggles and the occasional motion-sensitive gloves.  I feel that when done right, technology can connect to humans not only on a mechanical level, but on an emotional level as well.

Since my life is almost entirely centred around computers already, I look forward to posting new articles every week on how technology can affect humans and the world.  I like exploring the broader, less technical side of technology, and hopefully at least some of my future articles can pique your interest.