Quote:
Originally posted by quant
the lag is there because UR is doing indexing and other things when appending data (you can opt out of indexing imported websites; I suppose the importing will then be faster) ...
It all comes from searching and sorting theory:
1. You either put everything in just as it is (maybe at the end of the database); then appending is lightweight and very fast, but searching will be slow, because you need to go through the whole database to find all occurrences of the search string.
2. On the other hand, if you insert into whatever structure the underlying database uses in a way that keeps your data sorted (indexed, ...), then appending will be a slower operation, but searching lightning fast ...
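For what it's worth, the trade-off quant describes is real and easy to see in SQLite itself (the engine UR is built on). Here is a rough sketch; the table names and the FTS4 index are throwaway assumptions for illustration, not UR's actual schema, and it assumes your SQLite build has FTS4 compiled in:

```python
# Illustrates append-only storage vs. index-maintained storage in SQLite.
# Table names and data are made up; this is not UR's schema.
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plain_tbl (body TEXT)")                  # 1. append-only, no index
con.execute("CREATE VIRTUAL TABLE fts_tbl USING fts4(body)")       # 2. full-text indexed

rows = [("some imported page text %d" % i,) for i in range(50_000)]

t = time.perf_counter()
con.executemany("INSERT INTO plain_tbl (body) VALUES (?)", rows)
print("append-only insert:", time.perf_counter() - t)              # fast inserts...

t = time.perf_counter()
con.execute("SELECT COUNT(*) FROM plain_tbl WHERE body LIKE '%text 4999%'")
print("full-scan search:  ", time.perf_counter() - t)              # ...but every search scans the table

t = time.perf_counter()
con.executemany("INSERT INTO fts_tbl (body) VALUES (?)", rows)
print("indexed insert:    ", time.perf_counter() - t)              # slower inserts (index maintenance)...

t = time.perf_counter()
con.execute("SELECT COUNT(*) FROM fts_tbl WHERE body MATCH 'text'")
print("indexed search:    ", time.perf_counter() - t)              # ...but searches hit the index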
But the point is not "why there is lag". I asked about the slowness via email and was given the technical reasons why a page import takes so long even on a very high-performance system.
The point is "there is no reason why UR cannot do this work in the background instead of locking up two applications and making the user wait, sometimes for a long time."
My note was simply a suggestion about a different way to think about the issue. In this age of quad-core processors, a single-threaded, blocking approach that stops the user's workflow cold does not make sense.
At the database level (SQLite), nothing needs to be done until the import actually takes place. All the UI needs is a placeholder item. When the import manager reaches the import job associated with that item, it makes the appropriate database entries, provided the import succeeds. If there is an error, the placeholder item's status is updated with an error message and the user can retry the import manually via some sort of "Try again" mechanism, which would put the job at the front of the queue. A smarter queue would update the placeholder and then retry the import automatically after an appropriate delay.
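To make that concrete, here is a minimal sketch of such a background import queue in Python. Nothing in it is UR's actual code; the items table schema, the fetch_page() stub, and the queue policy are assumptions used only to illustrate the idea:

```python
# Sketch of a non-blocking import queue: the UI creates a cheap placeholder
# row and returns immediately; a worker thread does the slow work later.
import sqlite3, threading
from collections import deque

db = sqlite3.connect("imports.db", check_same_thread=False)   # throwaway file name
db.execute("""CREATE TABLE IF NOT EXISTS items (
                id INTEGER PRIMARY KEY,
                url TEXT,
                status TEXT,      -- 'pending', 'imported', 'error: ...'
                content TEXT)""")
db_lock = threading.Lock()

jobs = deque()                            # pending import jobs (item ids)
jobs_available = threading.Condition()

def request_import(url):
    """UI thread: create a lightweight placeholder and return immediately."""
    with db_lock:
        cur = db.execute("INSERT INTO items (url, status) VALUES (?, 'pending')", (url,))
        db.commit()
        item_id = cur.lastrowid
    with jobs_available:
        jobs.append(item_id)              # normal imports go to the back of the queue
        jobs_available.notify()
    return item_id                        # the UI can show this placeholder right away

def retry_import(item_id):
    """UI thread: a manual 'Try again' puts the job at the front of the queue."""
    with jobs_available:
        jobs.appendleft(item_id)
        jobs_available.notify()

def fetch_page(url):
    """Stand-in for the slow part: download, parse, and index the page."""
    raise NotImplementedError

def worker():
    """Background thread: does the slow work so the UI never blocks."""
    while True:
        with jobs_available:
            while not jobs:
                jobs_available.wait()
            item_id = jobs.popleft()
        with db_lock:
            (url,) = db.execute("SELECT url FROM items WHERE id = ?", (item_id,)).fetchone()
        try:
            content = fetch_page(url)
            with db_lock:
                db.execute("UPDATE items SET status = 'imported', content = ? WHERE id = ?",
                           (content, item_id))
                db.commit()
        except Exception as exc:
            with db_lock:
                db.execute("UPDATE items SET status = ? WHERE id = ?",
                           ("error: %s" % exc, item_id))
                db.commit()

threading.Thread(target=worker, daemon=True).start()
```

The essential property is that request_import() returns as soon as the placeholder row exists; all the slow fetching and indexing happens on the worker thread, and a failed job simply flips the placeholder's status so it can be retried later.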
I hope a future version of UR will be smarter about running long operations in the background. It takes a little more development work to set up the infrastructure for background jobs, but there is a very large payoff: all sorts of long-running tasks can be pushed into the background so the user can keep working instead of hitting a complete application lockup. And that is the ultimate goal: the user must be able to maintain an efficient, timely workflow with the application.