This term has become very common since the boom of web application development in recent years. As documented in the scalability node, a system is said to have good scalability or "scale well" if it can perform close to twice as many operations (such as running a search on E2) with twice as much hardware behind it.
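To make the "twice the work with twice the hardware" idea concrete, here is a toy sketch (not from this writeup — the function and figures are made up for illustration) of scaling efficiency: throughput gained per unit of hardware added, where 1.0 is perfect linear scaling.

```python
# Toy illustration: scaling efficiency = throughput ratio / hardware ratio.
# 1.0 means perfect linear scaling; real systems always fall a bit short.
def scaling_efficiency(base_ops, scaled_ops, hardware_factor):
    """How much of the extra hardware turned into extra throughput."""
    return (scaled_ops / base_ops) / hardware_factor

# Doubling the hardware takes us from 30 to 55 requests served at once:
print(scaling_efficiency(30, 55, 2))  # roughly 0.92 -- this "scales well"
```

A system managing only, say, 40 requests on doubled hardware (efficiency 0.67) would be said to scale poorly.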
In general terms, scaling can be achieved in two dimensions: horizontal and vertical. Scaling an application horizontally basically means buying more boxen. Say you run a single-CPU webserver which can serve 30 people at once before it runs out of processor power, and you need it to serve 60: scaling horizontally, you would buy another box and employ some software or hardware solution to let the two machines share the work. Scaling vertically, you would whack another processor into the existing box instead.
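The "share the work" part can be as simple as handing requests to each box in turn. A minimal sketch, assuming two hypothetical webservers (the names `web1` and `web2` are made up) and plain round-robin distribution:

```python
import itertools

# Hypothetical sketch: the simplest software load balancer just hands
# each incoming request to the next box in the rotation (round-robin).
servers = ["web1", "web2"]
rotation = itertools.cycle(servers)

def dispatch(request):
    """Pick the next box in the rotation to handle this request."""
    return next(rotation)

# 60 simultaneous users spread evenly over two 30-user boxes:
assignments = [dispatch(r) for r in range(60)]
print(assignments.count("web1"), assignments.count("web2"))  # 30 30
```

Real load balancers (Cisco kit, Windows NLB and friends) are cleverer about sticky sessions and uneven load, but the core idea is this rotation.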
Scaling horizontally has a number of advantages. In most cases it is easier to achieve very good horizontal scalability than very good vertical scalability. Also, a horizontally scaled system has redundancy built in, so one of your servers can go down and, assuming that your system is configured well, the other server(s) will carry on and at least provide some reduced level of service to the clients trying to connect rather than giving them a blank screen and a blank expression.
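The redundancy win described above can be sketched in a few lines — a hypothetical health-check scheme (server names and the health flags are made up), where the balancer routes around a dead box instead of serving blank screens:

```python
# Hypothetical sketch of built-in redundancy: skip boxes that are down.
servers = {"web1": True, "web2": False}  # web2 has fallen over

def pick_server():
    """Return a healthy box, or fail loudly if the whole farm is gone."""
    healthy = [name for name, up in servers.items() if up]
    if not healthy:
        raise RuntimeError("total outage -- blank screens all round")
    # Fewer boxes means slower service, but still *some* service.
    return healthy[0]

print(pick_server())  # "web1" carries on alone
```

A vertically scaled single box has no equivalent fallback: when it dies, everything dies with it.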
On the downside, however, horizontal scaling can add complexity to the system. Fortunately, most server operating systems now have support for horizontal scaling built in to reduce the technological burden. Even Windows has it as of Windows 2000 Advanced Server (via its Network Load Balancing service). Hardware load-balancing solutions for webservers make life even easier and make Cisco rich too, so everyone's a winner. There is more bad news for the horizontal approach, though: the cost may be significantly higher than scaling vertically, due to a number of factors such as:
Higher development costs to make your application work with load balancing across multiple servers.
Higher hardware costs - another server versus another processor (although in reality it's rarely that simple).
More machines generally means more maintenance time, and time is money, folks.
I have deliberately side-stepped the issues surrounding what scales well and what doesn't, be it operating systems, RDBMSs or web application server software. Zealotry aside, the general opinion of the IT industry seems to be that all things *nix and J2EE/Java have scalability in spades. Some cocky young upstarts from Redmond have recently started to argue the toss on this point, but they do tend to talk a lot of crap, so only experience will prove it either way.