Thursday, March 8, 2012
MogileFS in Tcl
ok I have a few experiences after having made it to dicts in the tutorial
[00:09] and some questions
[00:09] arrays seem awkward to use yet are the main data structure in tcl?
[00:10] or do they get easier with practice?
[00:10] and are dicts mostly better to use now?
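A quick side-by-side of the two structures being asked about, as a minimal sketch (the variable names are just illustrative). Arrays are collections of named variables addressed as `$name(key)`; dicts (Tcl 8.5+) are first-class values that nest and can be passed around, which is why they often feel less awkward:

```tcl
# Tcl arrays: collections of variables accessed as $name(key);
# they cannot be passed by value or nested.
array set ages {alice 30 bob 25}
puts $ages(alice)              ;# 30

# dicts (Tcl 8.5+): ordinary values, so they can be copied,
# passed to procs, and returned like any other value.
set ages2 [dict create alice 30 bob 25]
puts [dict get $ages2 alice]   ;# 30

# dicts nest naturally, which arrays cannot do directly:
set db [dict create users [dict create alice [dict create age 30]]]
puts [dict get $db users alice age]   ;# 30
```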
[00:10] I never learned C so I am not familiar with procedural looping algorithms although I did discover the finite state machine aka flag setting method
[00:11] seems with that and the split on \n to reduce files to lines
[00:11] and regex when needed
[00:11] one can then build up programs that respond to almost anything
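That read/split/regexp pattern, as a self-contained sketch (the file name and log format are made up; a real script would open an existing file):

```tcl
# write a small sample file so the sketch runs on its own;
# "sample.log" and its contents are invented for illustration
set fh [open sample.log w]
puts $fh "GET /index.html 200"
puts $fh "GET /missing 404"
puts $fh "GET /about.html 200"
close $fh

# the pattern from the chat: read the whole file, split on \n,
# then regexp each line as needed
set fh [open sample.log r]
set data [read $fh]
close $fh

foreach line [split $data \n] {
    # capture a trailing 3-digit status code, when present
    if {[regexp {(\d{3})$} $line -> code]} {
        incr counts($code)
    }
}
puts $counts(200)   ;# 2
puts $counts(404)   ;# 1
```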
[00:11] the whole upvar and modifying things outside the current proc I still have to understand more about
[00:12] I had perl training at work and named blocks seem to be perl's way
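A minimal example of the upvar mechanism in question (`bump` and `counter` are illustrative names): upvar aliases a local name to a variable one level up the call stack, which is how a Tcl proc modifies things outside itself:

```tcl
proc bump {varName amount} {
    upvar 1 $varName v   ;# v is now an alias for the caller's variable
    incr v $amount       ;# so this changes the variable in the caller
}

set counter 10
bump counter 5
puts $counter   ;# 15
```

Note the caller passes the variable's *name* (`counter`), not its value; this is Tcl's call-by-name equivalent of what Perl does with references.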
[00:12] overall I find tcl amazingly productive
[00:12] so much so that I dont really use bash for scripting anymore
[00:12] now what in tcl can replace what bash does with ssh box "somecommand" aka running a remote procedure?
[00:13] is that what channels are for?
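The direct analogue of the bash idiom is Tcl's `exec`, which runs an external program and returns its stdout; channels come in when you want to read the remote output incrementally. A minimal sketch ("box" and the remote commands are placeholders, shown commented so the sketch runs anywhere):

```tcl
# run a command and capture its output; in real use this would be:
#     set output [exec ssh box uptime]
# demonstrated here with a local command so the sketch is runnable:
set output [exec echo hello]
puts $output

# for long-running remote commands, a command pipeline gives you a
# channel you can read line by line with the event loop
# (again, "box" is a placeholder host):
#     set chan [open "|ssh box tail -f /var/log/syslog" r]
#     fileevent $chan readable [list handleLine $chan]
```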
[00:18] I think something that emulates MogileFS but using tcl would be cool.
[00:19] the hard part is the program that accepts requests and then finds the file on the least loaded backend server
[00:19] but maybe wub or wibble could handle that
[00:19] with a little polling over channels or something
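Wub or Wibble would do this over full HTTP, but the core of that request-accepting tracker can be sketched with a plain socket server. Everything here is invented for illustration: the `locations` table, the port, and the tiny "GET &lt;name&gt;" protocol (the listening lines are commented so the sketch runs without blocking):

```tcl
# maps filename -> node currently holding it (illustrative data;
# a real tracker would track load and multiple replicas per file)
set locations [dict create file1 node3 file2 node7]

proc lookup {name} {
    global locations
    if {[dict exists $locations $name]} {
        return [dict get $locations $name]
    }
    return NOTFOUND
}

# called once per incoming connection; answers "GET <name>" requests
proc serve {chan addr port} {
    fconfigure $chan -buffering line
    if {[gets $chan req] >= 0 && [regexp {^GET (\S+)} $req -> name]} {
        puts $chan [lookup $name]
    }
    close $chan
}

# to actually listen, start the server and enter the event loop:
#     socket -server serve 7500
#     vwait forever
puts [lookup file1]   ;# node3
```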
[00:20] the idea is to have 20 nodes, each with 6 cheap disks in software raid 0, and have tcl copy file1 N times among the 20 nodes according to policy
[00:20] then if u lose 1 or 2 nodes who cares
[00:20] and if you do
[00:20] the program sorts it out by copying to up nodes so that the file is always on say 4 nodes at least
[00:20] could be 10 for an important file
[00:21] then any incoming request goes to the most responsive node
[00:21] to start with could be dumb round robin but ideally would be smarter
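A minimal sketch of the two policies just described: dumb round-robin node selection, plus the "file must live on at least N nodes" check that would drive re-copying. The node names, replica table, and minimum-copy count are all assumed for illustration:

```tcl
set nodes {node1 node2 node3 node4}
set rr 0

# dumb round robin: hand out nodes in rotation
proc nextNode {} {
    global nodes rr
    set n [lindex $nodes $rr]
    set rr [expr {($rr + 1) % [llength $nodes]}]
    return $n
}

# replicas maps filename -> list of nodes holding a copy;
# a file needs repair when it lives on fewer than minCopies nodes
proc needsRepair {replicas name minCopies} {
    expr {[llength [dict get $replicas $name]] < $minCopies}
}

set replicas [dict create file1 {node1 node3}]
puts [nextNode]                        ;# node1
puts [needsRepair $replicas file1 4]   ;# 1 -> schedule copies to up nodes
```

The "smarter" version would replace `nextNode` with a pick based on measured node load or response time, but the policy check stays the same.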
[00:21] this would get rid of the need for crappy block level solutions that wreak havoc when they break
[00:21] and are expensive
[00:22] and let you add heterogeneous nodes
[00:22] as long as they can run tcl and have space
[00:22] no more friggin nfs
[00:22] then you might have to have wub have some standbys on say 3 other nodes or something
[00:22] to avoid a single point of failure
[00:22] but heck http is so fast
[00:23] and NFS so crappy
[00:23] and then I guess for security you could do whatever wub or wibble use for auth
[00:23] although by the time the front end web app processes and calls for static files
[00:23] the backend could be firewalled to all hell
[00:23] and none of it be near the edge
[00:23] of the network