I recently decided to undertake an Obj-C project that would require a database backend. While I was thinking about file formats (before deciding not to reinvent the wheel, and while I still thought I might implement the database part myself), I figured I'd dispense with file and record locking by having a single data manager which was messaged by (and messaged) an arbitrary number of client-interface managers. Since only one object did reads and writes, file locking was unnecessary. I was excited to see Suneido use the same idea (no file/record locking, because a single database manager has exclusive read/write access to the files); not having studied databases, I thought it was interesting that I'd come up with the same solution to simplify the system. (i.e., I was proud of myself for guessing a solution that turned out to already exist.)
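To make the single-manager idea concrete, here's a minimal sketch (in Python, purely illustrative; the class and method names are my own invention, not Suneido's or my Obj-C project's): every client pushes its request onto one queue, and a single worker thread owns the data, so reads and writes are serialized and no file or record locking is ever needed.

```python
# Hypothetical sketch: one manager thread owns the store outright,
# so concurrent clients never need file or record locks.
import queue
import threading

class DataManager:
    """Sole owner of the data store; serializes all reads and writes."""
    def __init__(self):
        self._store = {}               # stands in for the on-disk records
        self._requests = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # The only code that ever touches self._store.
        while True:
            op, key, value, reply = self._requests.get()
            if op == "write":
                self._store[key] = value
                reply.put(True)
            else:                      # "read"
                reply.put(self._store.get(key))

    def request(self, op, key, value=None):
        reply = queue.Queue(maxsize=1)
        self._requests.put((op, key, value, reply))
        return reply.get()             # block until the manager answers

mgr = DataManager()
mgr.request("write", "rec1", "hello")
print(mgr.request("read", "rec1"))     # prints "hello"
```

The design choice is the point: correctness comes from exclusive ownership rather than locking, which is simple right up until one machine can't keep up.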
BUT: what happens when the single database manager gets so many requests for data, and so many instructions to write data, that one manager running on one machine can't possibly withstand the load? If you scale this to enough clients, it will eventually happen regardless of how small and quick the manager is. And if the manager is doing complicated search logic, those requests could be so burdensome that even a relatively small number would kill it. Some possibilities that come to mind are:
(a) have several managers acting as peers to service requests filtered through a load balancer, with each manager having equal access to a shared drivespace ... but now they need to either
(i) use file locking so that simultaneous write requests don't overwrite one another,
(ii) message one another to make sure nobody else is working on a conflicting simultaneous request, which could itself cause delays: the managers would spend their time negotiating over individual files and waiting for answers instead of just getting results, or
(iii) have the load balancer keep all requests for any particular record going to the same database manager ... but if the clients don't know which records they're looking for until the search logic has run, which is likely in lots of database uses, this can't be done without the load balancer itself having access to the data, which makes the "database managers" irrelevant since the work is being done by the load balancer .... or (hopefully not)
(b) it doesn't scale beyond the capabilities of the box containing the database manager, so either use a big, butch box or keep the request load light.
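Option (a)(iii) can at least be sketched for the case where requests *are* keyed by record id before any search logic runs. A hash of the key picks the manager, so the balancer never touches the data itself and every request for a given record deterministically lands on the same manager. This is my own illustration (the names `route` and `managers` are hypothetical), not anything Suneido does:

```python
# Minimal sketch of option (a)(iii): route by hashing the record key,
# so all requests for one record reach the same database manager.
# Assumes the client knows the record key up front, which is exactly
# the assumption that breaks down once search logic is involved.
import hashlib

def route(record_key: str, managers: list) -> object:
    """Pick a manager deterministically from the record key alone."""
    digest = hashlib.sha256(record_key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(managers)
    return managers[index]

managers = ["manager-0", "manager-1", "manager-2"]

# The same key always routes to the same manager, with no data access
# by the balancer:
assert route("rec42", managers) == route("rec42", managers)
assert route("rec42", managers) in managers
```

The limitation is the one noted above: when a query has to search before it knows which records it wants, there is no key to hash on, and this kind of content-oblivious routing stops working.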
Have I missed something? Please write. I'm trying to deal with the same problem myself, and am interested in either (a) using your solution as a backend for handling my problem, or (b) using your solution to enlighten my search for my solution.
I'm interested in creating a piece of open-source-ware capable of handling enterprise-level demands, so these concerns are fairly serious to me. Personally I won't soon need anything requiring load balancing of requests, but I think this is better built in than bolted on (like security). Also: Suneido's site offers binaries built for operating systems that offer apps the Win32 API ... does Suneido require this API? (I haven't looked at the code.) Assuming the guts of Suneido are mostly code for handling Suneido's OO language and its search logic, there's little obvious reason the Win32 API would need to be invoked ... although the code for controlling MS "Windows" UI elements certainly would.
Will Suneido be portable to non-Win32 environments? Will it turn up on *nix boxes where I will want to use it? And most importantly: how do you answer extreme request loads? (i.e., assuming you put the database manager module on a dedicated box and this is not enough to satisfy the requests?) Is this a concern for Suneido?
Edited later: Andrew McKinlay posted this in Suneido's forums for discussion, and answered there that scalability was (at least temporarily) sacrificed for performance and expedience (to get a working product out, so people don't get tired of coding forever without the payoff of seeing the code do real work). http://www.suneido.com