Hi folks;
I'm just starting work on quite a large Access project. I've been called in to chase a few gremlins out of a database that's become pretty big: 25,000 records, a bunch of queries (some multilayered), and lots of macros. The frontend is installed on 12 local machines and is often used by all 12 users concurrently.
The problem is that the backend is stored on a Windows shared drive in Copenhagen, and the network is strained: bandwidth is thin, latency is high, and some of these queries take up to five minutes to run. Combined with the multi-user nature of the DB, this means records are getting lost or corrupted, seemingly at random, at a rate of 5 or 6 records per user per day. Too much! Could delays in the record-locking calls between the front end and back end be to blame? Anybody have any ideas?
Thanks!
Hen