There are several parties to such an execution of code.
Sometimes we don't think of them as programs, but they are. When you watch video on YouTube, you are running a small Flash program that connects back to the YouTube server and streams video from it. We'll get back to that connection later.
Programs that run automatically inside a web page have constraints placed upon them that conventional desktop applications do not. Anyone can put Flash in their web page and have it run on computers all over the world, so the code must still be safe to run on your PC even if the web page was created by maladjusted teenage hackers and/or the Russian Mafia. The vendor must design the platform so that it is safe on the client even when the server is hostile.
In general, these platforms make the code run inside a "sandbox", which provides severely gated access to the underlying PC's resources. The client-side code can display graphics on the screen, but only within the confines of its own window; otherwise it could create fake dialogs and trick the user into entering passwords or credit card details. It can accept user input, but cannot monitor all keystrokes. It can store settings and data in carefully isolated parts of the file system, but it cannot list, read or write the other files on your computer. And it can connect back to the server from which it came for more data, but it cannot make connections to other servers; that would allow it to use the client computer as part of a distributed denial-of-service attack or a cross-site scripting exploit.
The party of the fourth part
But what if you do want to access the potential fourth party to this set-up, the "other servers"? There are lots of cases where this could be useful. For instance, the Twhirl Twitter client is downloaded from http://www.twhirl.org/ but works by connecting to Twitter and other websites.
The first solution was to do it in two hops: the client connects to the server that it came from, which connects onwards to the other server, gets a response and forwards it to the client. The problems with this are that it is slower and more complicated, and that the server's workload grows with the number of clients it serves, so it does not scale up.
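The two-hop approach can be sketched as a tiny relay function. This is an illustration only; the names here (relay_fetch, the stub fetcher) are mine, not any real Flash or Silverlight API.

```python
# A sketch of the two-hop approach: the client asks its own origin
# server, which forwards the request to the other server on its behalf.
from urllib.request import urlopen

def relay_fetch(upstream_url: str, fetch=None) -> bytes:
    """Fetch upstream_url on behalf of a client.

    `fetch` is injectable so the relay can be demonstrated without a
    network; by default it performs a real HTTP GET.
    """
    fetch = fetch or (lambda url: urlopen(url).read())
    # Every client request costs the origin server a full upstream
    # round trip -- this is why the approach does not scale.
    return fetch(upstream_url)

# Stubbed usage: pretend Twitter returned a timeline document.
fake = lambda url: b'{"timeline": []}' if "twitter" in url else b""
print(relay_fetch("http://twitter.com/statuses.json", fetch=fake))
```

The point the sketch makes is architectural: the origin server is in the middle of every exchange, doing work that the client could in principle do itself.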
The second solution is to carefully relax the restrictions, and allow servers to opt in to allowing Flash and Silverlight clients to connect to them. The server has a client access policy that specifies whether clients can connect.
Adobe pioneered this approach. Before allowing a client to connect to www.mysite.com, the Flash runtime looks for a file called crossdomain.xml at that site's root, i.e. http://www.mysite.com/crossdomain.xml.
Here's a sample that allows access from all comers:

<?xml version="1.0"?>
<cross-domain-policy>
   <allow-access-from domain="*"/>
   <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
Microsoft's Silverlight shamelessly adopts the same policy - and the same file. Silverlight will first look for www.mysite.com/clientaccesspolicy.xml, which is Microsoft's own way of doing the same thing, but failing that it will look for www.mysite.com/crossdomain.xml.
Here's a sample clientaccesspolicy.xml that allows access from all comers. It's quite similar in concept, only with different syntax:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
Is a client access policy a good idea?
I have not yet made up my mind whether the concept of client access policies is on the whole a good thing. It does not guard against all problems, but it probably does plug one particular hole, at the cost of a bit of inconvenience. It's quite restrictive because you have to opt in.
However, unless you have influence over Adobe and Microsoft and a better idea in mind, we're stuck with it. So it is very much a good idea for your site to be aware of the idea of client access policy, and either have one, or deliberately not have one.
So I have to put code on my server to let your client work?
It's not code, it's a configuration file. It specifies who is allowed to go where. You probably already have a file called robots.txt that fills a similar role. The difference is that robots.txt allows you to opt out of web crawlers, while a client access policy is more restrictive - you have to opt in.
One file satisfies all clients. You can have two files if you want to treat Silverlight differently from Adobe clients. Any similar future client platforms will probably also respect crossdomain.xml, simply because it's in place now.
Other than that: yes, yes you do. These clients can't work without it, by design.
Sites have this?
Yes they do. Look at http://www.twitter.com/crossdomain.xml, http://maps.google.com/crossdomain.xml or http://api.flickr.com/crossdomain.xml
How is opt-in enforced? How do you get client code to respect this?
Opt-in is enforced by the platform upon which the client-side code runs. It's possible that client-side code will try to subvert or work around the runtime upon which it runs, but now we're in the realm of patchable implementation bugs, not fundamental design flaws.
When client code tries to connect to a site that doesn't have a client access policy, it just gets a security error in response. Yes, I've tried this in Silverlight.
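The enforcement step described above can be sketched as follows. The helper names (check_policy, SecurityError, the stub fetcher) are hypothetical illustrations of the lookup order, not the actual runtime's internals.

```python
# Sketch of the runtime's check: try Silverlight's clientaccesspolicy.xml
# first, fall back to Adobe's crossdomain.xml, and refuse the connection
# if neither file exists.
class SecurityError(Exception):
    """Raised when a site has not opted in to cross-domain access."""

def check_policy(site: str, fetch) -> str:
    """Return the name of the policy file that authorises `site`.

    `fetch(url)` should return the file's text, or None on a 404.
    """
    for name in ("clientaccesspolicy.xml", "crossdomain.xml"):
        if fetch(f"http://{site}/{name}") is not None:
            return name
    raise SecurityError(f"{site} has no client access policy")

# Stub "web": only a crossdomain.xml exists on this site.
files = {"http://api.flickr.com/crossdomain.xml": "<cross-domain-policy/>"}
print(check_policy("api.flickr.com", files.get))  # crossdomain.xml
```

A site with neither file raises the security error, which is exactly the behaviour described above.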
Why is the onus on the other server to supply this file?
If the client access policy were served with the Flash or Silverlight content, then malicious content could be accompanied by a malicious access policy. You get to control the gates to your own site. But if I manage to hack into a site and inject Flash code that then "phones home" to my server, I can then set up my server's client access policy to take the call. It's not perfect, but it does plug some holes.
Does Everything2 have a client access policy?
No. But Everything2 is the kind of site that should - there are web and desktop clients that connect to E2's HTML pages and XML tickers, and I don't think that Flash and Silverlight clients should be excluded from that party. The issue has been raised. Watch this space.
Microsoft Developer Network, "Making a Service Available Across Domain Boundaries", http://msdn.microsoft.com/en-us/library/cc197955(VS.95).aspx
Lucas Adamski, "Cross-domain policy file usage recommendations for Flash Player", http://www.adobe.com/devnet/flashplayer/articles/cross_domain_policy.html