#21, 11-30-2012, 10:23 PM
schferk
Registered User
Join Date: 11-02-2010
Posts: 151
armsys asks me in another thread: "Have you actually ever exercised due diligence in experimenting with products before publishing your magnificent manifestos here? For example, for RM, have you actually tested your RM dashboard(s) with topics/tasks embedded in 30 or more respective mission-critical mindmaps?"

I did not even install the trial, and I never pretended to have done so. But I'm VERY interested in replicating the part of its functionality that it would be sensible to have available, in order to overcome the problem that no MM prog has got multi-map clones - and no outlining prog has them either, btw.

And from what I see, it's perfectly possible to have such rather simple macros for rather simple functionality. Say you have 50 maps, where many items are "ToDo's" of different kinds. So what to do in this simple scenario? You work on your topic maps, but you mark the items that should be "shuffled up" into the different dashboards - one dashboard for one kind of "ToDo", to begin with; combinations could also be processed later, with just a little additional scripting.

You "mark" them, I say. Well, this has to be done, and undone, fast. MM provides, among many more (but not so simply accessible) ways of marking, two ranges of symbol markers, "priority" 1 to 9, by shift-control-1...9 (the '^0 deleting a marker 1-8 that's there) - but you could use 1,2,3 for priority, and 4 to 9 for any other classification, e.g. Then, there are many more symbol markers that ain't prefigured, but that you assign yourself to ^1...^9 (the ^0 deleting, again) before using them - that gives you 18 such markers for a start, and if that's not enough, you then would add text markers, background colors, or other.

Sideline: As I said before, it's all about macro-driven, external "up-shuffling", i.e. copying these items into your dashboards, which get rebuilt again and again. On the web, people say they wait 45 min. for this to be done - which means, to me, that it's nothing but a technically primitive (even if highly complicated, from the programmer's pov) work-around, so I doubt that such a system is really ready to fulfill the task armsys would like it to fulfill. I think real PM should not rely on outdated data whose updating takes 45 min., with the system closed down for any use during this wait while it rebuilds "everything further up" from scratch.

But I seriously think that before we get real clones - updated in real time - a macro system integrating multiple MM maps into dashboards could be viable for less demanding, not too complicated 1-person uses, for lack of a better solution here. As explained, the alternative is an external applic that builds up, trans-maps, lists of such marked items by search, aggregating them into one list - also not a real-time solution, and one that does not deliver dashboards but forces you to run the same searches for these (ToDo and other) markers again and again. That's why my preference goes to a very simplified, self-written RM system.
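For comparison, that "external search" alternative boils down to re-running the same scan every time you want your list. A sketch over a toy in-memory model (maps as dicts, items with a text and a set of marker numbers - not the real file format, just the logic):

[code]
# Toy model of two work maps; the names and items are made up.
work_maps = [
    {"name": "project_a.mmap",
     "items": [{"text": "Call the printer", "markers": {4}},
               {"text": "Background notes", "markers": set()}]},
    {"name": "project_b.mmap",
     "items": [{"text": "Mail the invoice", "markers": {5}}]},
]

def collect(marker):
    """Re-run the search for one marker over all maps; nothing is kept between runs."""
    hits = []
    for m in work_maps:
        for item in m["items"]:
            if marker in item["markers"]:
                hits.append((m["name"], item["text"]))
    return hits

print(collect(4))   # [('project_a.mmap', 'Call the printer')]
[/code]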

Sideline: Whilst for MM you need an external add-in in order to build up a tree comprising the respective trees of several / multiple maps, it's VM that has such a trans-map tree view (visible all the time, by option) built in - but it does NOT seem to me that it will give you item selections there by markers. (I will have to check this, since that would make it a ready-for-use system, without dashboards, but with a little scripting.)

Back to MM: What I have in mind, then, is simply shuffling marked items up, by macro, into the respective dashboard, THE MOMENT I MARK THEM. I.e. I wouldn't just do ^4, e.g., for a given item within any of my working maps, but would trigger a macro that does the ^4 and also copies the item into the "dashboard 4" map. This is perfectly possible on condition that you always load all the dashboard maps, so that they are available in memory for receiving such operations.
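The logic of that "mark and copy in one go" macro, sketched in the same toy Python model (I don't want to pretend this is GyroQ syntax - it isn't; it only shows what the one hotkey would have to do):

[code]
# Dashboards are assumed to be loaded and kept in memory, keyed by marker number.
dashboards = {4: {"name": "dashboard_4.mmap", "items": []}}

def mark_and_copy(item, marker):
    """What one hotkey (e.g. ^4) would trigger: set the marker on the item,
    then append a copy of it to the matching dashboard map."""
    item["markers"].add(marker)
    dashboards[marker]["items"].append({"text": item["text"], "markers": {marker}})

item = {"text": "Call the printer", "markers": set()}
mark_and_copy(item, 4)          # item is now marked AND present in dashboard 4
[/code]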

On the other hand, whenever I delete a marker (!) of an item in any map, a macro would delete the copy (!) of that item from the respective dashboard map. And whenever I delete an item (!) from any of the dashboards, a macro would delete the respective marker (!) on the original of that item in its map.
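Sketched out, these two deletion directions would look like this (toy model again; note that matching by text is only a stand-in here, since without clones there is no shared identifier between original and copy - which is exactly where the trouble starts):

[code]
def unmark(item, marker, dashboards):
    """Marker deleted on a work-map item -> drop its copy from that dashboard."""
    item["markers"].discard(marker)
    dash = dashboards[marker]
    dash["items"] = [c for c in dash["items"] if c["text"] != item["text"]]

def delete_from_dashboard(copy, marker, dashboards, work_maps):
    """Copy deleted in a dashboard -> clear the marker on the original,
    found here by a brute-force content search over all work maps."""
    dashboards[marker]["items"].remove(copy)
    for m in work_maps:
        for item in m["items"]:
            if item["text"] == copy["text"]:
                item["markers"].discard(marker)
                return
[/code]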

Well, it's possible that such two-way processing is too complicated, or cannot be realized for lack of some command this macro would need within the macro language that GyroQ makes available. Then I would indeed be in the same situation as the RM users who have to let RM build up the dashboards from scratch, once or twice a day. Of course, a script that builds these dashboards up in one run - instead of bit by bit, in real time, whenever you change a marker or a marked item - by checking every item in every non-dashboard map for markers and then processing it accordingly, doesn't present any technical problem; but it is awkward because of the waiting time it imposes upon the user.

(In fact, another macro would update the text of the item in the dashboard whenever you change the text of a marked item in a work map; the other way round, again, there might be probs.)
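That rebuild script really is trivial in principle - which is exactly my point; the cost is only in the waiting. In the toy model:

[code]
def rebuild_dashboards(work_maps, dashboards):
    """Rebuild every dashboard from scratch by scanning every item of every work map.
    Technically simple; the user just sits and waits while it runs."""
    for dash in dashboards.values():
        dash["items"].clear()
    for m in work_maps:
        for item in m["items"]:
            for marker in item["markers"]:
                if marker in dashboards:
                    dashboards[marker]["items"].append(
                        {"text": item["text"], "markers": {marker}})
[/code]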

Sideline: Updating dashboard items from changes to the "originals" is easy, since the marker on the "original" identifies the respective dashboard map (abstracting here from any possible combinations, of course, but those could be processed sequentially, one marker at a time). The other way round, there could be a prob, since there isn't any marker identifying the "original"'s map (= where to look for the item to be updated "downwards"), and so it's possible that downward updating is impossible, or asks for too much processing power (e.g. by searching all maps for that item, by content).
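In the toy model, the asymmetry is plain to see: upwards, the marker tells you which dashboard to touch; downwards, you'd have to search everything by content, which is the possibly prohibitive part:

[code]
def update_upwards(item, new_text, dashboards):
    """Original changed: its marker(s) identify which dashboard copies to update."""
    for marker in item["markers"]:
        for copy in dashboards[marker]["items"]:
            if copy["text"] == item["text"]:
                copy["text"] = new_text
    item["text"] = new_text

def find_original(copy, work_maps):
    """Dashboard copy changed: no back-reference to its source map exists,
    so the only option is a content search over every map."""
    for m in work_maps:
        for item in m["items"]:
            if item["text"] == copy["text"]:
                return item
    return None
[/code]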

You see here that, from a technical pov, it's really pc-stone-age processing of data, so let's hope there will be trans-map clones some day. If you have to do such amounts of external scripting for simple commands (remember, if there were clones, any of these items anywhere could be identified / addressed by a unique identifier number, to begin with), there simply isn't enough internal code that has been done, and that's not good. What I mean is: from a technical pov, there's a tremendous difference between internal functionality and heavy external scripting - 45 min. of wait time for updating your data, e.g.
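To put the clone point into one line each: with unique identifiers, "find that item" is a direct lookup; without them, it's a scan over all your data, every time:

[code]
# With clones / unique IDs (hypothetical): a direct lookup.
items_by_id = {1017: {"text": "Call the printer"}}
original = items_by_id[1017]

# Without IDs (today's situation): scan every item of every map by content.
def find_by_content(text, work_maps):
    return next((i for m in work_maps for i in m["items"] if i["text"] == text), None)
[/code]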