'Lo, kids! I've got another algorithm I want to idiot check. This one's a way of ensuring that a bunch of data passed in a form can't be tampered with easily.
Ok. Say you're pulling a bunch of data from some external source in response to a search. Some of this data includes things like prices, discounts, and the like. It's not important that this information be hidden from a potential (malicious or otherwise) user, but it is important that it can't be tampered with. Because there's no easy way for the receiving page to re-validate the data, we need to mark it somehow.
A plain checksum is out of the question: too easy to recompute after tampering. However, the software is web-based, so we could store some kind of hidden key in a session variable. We could then concatenate the contents of each form field that must not be alterable, append the hidden key to the start or end, and hash the resulting string to generate a fingerprint to be passed along with the form. Checking would mean reassembling the string from the submitted form fields and the key, hashing the result, and comparing it to the fingerprint.
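For concreteness, here's a minimal sketch of that scheme in Python. The names (`fingerprint`, `verify`) and the choice of SHA-256 are my own assumptions, not anything fixed by the scheme; I've used the stdlib `hmac` construction rather than bare concatenate-and-hash, since it does the key-mixing properly, and a separator between fields so that `("ab", "c")` and `("a", "bc")` don't produce the same string:

```python
import hmac
import hashlib

def fingerprint(fields, key):
    """Compute a keyed fingerprint over the protected field values.

    fields: list of strings (the form values that must not change)
    key: per-session secret, as bytes
    """
    # Join with the ASCII "unit separator" so field boundaries are
    # unambiguous: ("ab", "c") and ("a", "bc") hash differently.
    message = "\x1f".join(fields).encode("utf-8")
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(fields, key, claimed):
    """Recompute the fingerprint from the submitted fields and compare."""
    # compare_digest is constant-time, so the comparison itself
    # doesn't leak how many leading characters matched.
    return hmac.compare_digest(fingerprint(fields, key), claimed)
```

So the page emitting the form would call `fingerprint(["19.99", "10%"], session_key)` and include the result as a hidden field; the receiving page would call `verify` with the submitted values and reject the post if it returns `False`.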
The hidden key would have to be fairly random: the output of a cryptographically strong random number generator, or even a UUID. It's not sufficient to use a single static key for the whole application, as that could be found out too easily. Nor is it ideal to use one that's refreshed periodically (say, regenerated whenever the application times out from lack of use). Though the latter might suffice, it's still potentially shared between a large number of hosts, and could be cracked by somebody determined enough.
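Generating such a key is a one-liner if a CSPRNG is available. As a sketch (the helper name `new_session_key` and the 32-byte size are my choices, not part of the scheme), Python's `secrets` module draws from the OS random source:

```python
import secrets

def new_session_key(nbytes=32):
    """Generate a fresh per-session secret from the OS CSPRNG.

    32 bytes (256 bits) is more than enough to rule out guessing;
    store the result server-side in the session, never in the form.
    """
    return secrets.token_bytes(nbytes)
```

A UUID would also work in a pinch, as long as it's a random (version 4) one generated from a decent entropy source, rather than something time- or MAC-address-derived.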
So a session key is the only way. It's tied to one client, and even if somebody mounts some kind of attack to try to decipher the key, throttling could be put in place to make sure they can't do much, and if they try, they'll be noticed.
So, does anybody see any flaws in this? It's a simple enough (and frankly, fairly obvious) scheme; I'd be unsurprised to learn I'm not the first person to come up with it.
I'd appreciate any feedback.