One of the original appeals of electronic music is that you can control every aspect of the sound. To go back to the guitar example, you can't control the envelope of a guitar note: it has a loud attack when the string is plucked, followed by a relatively quick decay. The tone of the instrument is also hardwired to the specific guitar; you can't change the kind of wood or the pickups.
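
To make the envelope idea concrete, here is a minimal sketch (in Python with NumPy, my choice of tools rather than anything from the original) of an ADSR envelope applied to a plain sine wave. On a synthesizer all four of these parameters are yours to set; on a guitar they are fixed by the physics of the string.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def adsr(attack, decay, sustain, release, length):
        """Build an ADSR amplitude envelope (times in seconds, sustain is a 0-1 level)."""
        a = np.linspace(0, 1, int(SR * attack), endpoint=False)
        d = np.linspace(1, sustain, int(SR * decay), endpoint=False)
        s_len = int(SR * length) - len(a) - len(d) - int(SR * release)
        s = np.full(max(s_len, 0), sustain)
        r = np.linspace(sustain, 0, int(SR * release))
        return np.concatenate([a, d, s, r])

    # A 440 Hz sine shaped by a slow half-second attack: the opposite
    # of a plucked string, and impossible on a real guitar.
    t = np.arange(int(SR * 2.0)) / SR
    note = np.sin(2 * np.pi * 440 * t)
    env = adsr(attack=0.5, decay=0.2, sustain=0.6, release=0.3, length=2.0)
    shaped = note[:len(env)] * env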

There are a number of ways to create the sounds people call electronic music. Subtractive synthesis, FM synthesis, and sampling are the most common right now, and with just those three, almost every aspect of a sound can be manipulated to a much greater degree than with a 'traditional' instrument.
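
As an illustration of how directly these techniques expose the sound, here is a hedged sketch of two-operator FM synthesis in Python/NumPy (again my choice of language, not the article's): one sine wave modulates the phase of another, and changing just two numbers, the ratio and the index, moves the timbre from a soft, near-pure tone to a metallic bell.

    import numpy as np

    SR = 44100

    def fm_tone(freq, ratio, index, seconds):
        """Two-operator FM: a modulator sine bends the phase of a carrier sine.

        freq  -- carrier frequency in Hz
        ratio -- modulator frequency as a multiple of the carrier
        index -- modulation depth; higher values add more sidebands (brighter tone)
        """
        t = np.arange(int(SR * seconds)) / SR
        modulator = np.sin(2 * np.pi * freq * ratio * t)
        return np.sin(2 * np.pi * freq * t + index * modulator)

    mellow = fm_tone(220, ratio=1.0, index=0.5, seconds=1.0)  # near-sine, soft
    bell = fm_tone(220, ratio=3.5, index=6.0, seconds=1.0)    # inharmonic, bell-like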

What is really going on here is a search for a better interface to control electronic instruments. It is extremely common to use laptops as live instruments right now, but the problem is interfacing with them in an intuitive way. Being able to change the pitch of the sound you are creating is great, but not if it takes 15 seconds to do and you can't accurately find the note you are looking for. The challenge is creating a musical instrument that gives you a sense of control comparable to a traditional one.

Two pieces of software people have been using to get this kind of control are Reaktor and Max/MSP. Both are essentially music programming languages, with Reaktor being more modular and easier to use. Another program I have found that deals with these issues and is ready to use out of the box is Spongefork. It is designed to be 'played' with a standard QWERTY keyboard and mouse.
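
To see what 'playing' a QWERTY keyboard means in practice, here is a small illustrative sketch. The mapping below is my own assumption (a common tracker-style convention), not Spongefork's actual layout: a row of keys becomes a chromatic scale, and each key press resolves to a frequency.

    # Map the home row to a chromatic scale starting at middle C (MIDI note 60).
    # This is an assumed example layout, not Spongefork's real key mapping.
    KEYS = "asdfghjkl;"
    BASE_MIDI = 60  # middle C

    def key_to_freq(key):
        """Return the frequency (Hz) for a key press, or None for unmapped keys."""
        if key not in KEYS:
            return None
        midi = BASE_MIDI + KEYS.index(key)
        return 440.0 * 2 ** ((midi - 69) / 12)  # standard equal-temperament formula

    print(key_to_freq("a"))  # 261.63 Hz -- middle C

The point is the immediacy: one keystroke, one note, with no menus in between, which is exactly the interface problem described above.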

The other thing to consider is how long it takes to learn to play an instrument. People spend years getting comfortable on guitar or keyboards. You should expect to put a comparable amount of time into learning electronic music creation before concluding that it lacks the expressiveness of a guitar.