Tuesday, March 31, 2009

misc cpu

http://www.dakeng.com/misc.html





Minimal Instruction Set Processor (F-4 MISC)



Update: Bill Langdon, a Senior Research Fellow at Essex University in the UK, has recently published a paper in which he analyzes the F-4 MISC processor (PDF), along with a similar one of his invention which he calls the T7.

CONTENTS

1. Overview
2. What is MISC?
3. Instruction set and specs
4. Code examples
5. Simulator
6. Applications

1. Overview

This article presents a design for a high performance minimal instruction set processor with code examples and a simulator written in VB6.

2. What is MISC?

MISC is an acronym for Minimal Instruction Set CPU, and it refers to a particular philosophy of microprocessor design along a spectrum of internal complexity.

A MISC processor has a minimal number of instructions to carry out computing tasks. Where a typical CISC (complex instruction set CPU) like the Pentium has well over 200 instructions, and a modern RISC (reduced instruction set CPU) may have 30 to 70, a MISC will have far fewer. One team, for example, uses 20 instructions as the base for its MISC design. A good summary of RISC vs CISC philosophy is here.

The processor described in this article uses just 4 instructions. A small number of instructions means a simple, fast chip that is easy to produce and consumes less power than its complex cousins.

3. Instruction set and specs

Although the specific implementation is not important, a 16 bit address and data word size is assumed here to make the decode stage as horizontal as possible and the fetch cycle perfectly symmetric. Each opcode sets exactly one of the 16 bits in the instruction word (a one-hot encoding). It would be possible to build this processor in any word size down to 4 bits, or up to 64 and beyond, depending on the application. The only requirements are that the address size equal the data size and that program and data space be shared (a Von Neumann architecture).

The instruction set for the F-4 (fast 4), a proof-of-concept minimal instruction set CPU, is listed below. Each instruction has several addressing modes. Note that the mnemonics are borrowed from the venerable MOS Technology 6502, used in the old 8-bit Nintendo, among other machines. There is only one "A" register, or accumulator, and a program counter.
F-4 MISC 16 bit instruction set

instruction    opcode   operand          operation              clocks
ADDi imm       00 01    16 bit value     imm + (A) --> A        3
ADDm addr      00 02    16 bit address   (addr) + (A) --> A     4
ADDpc          00 04    null operand     PC + (A) --> A         3
BVS addr       00 08    16 bit address   addr --> PC if V = 1   3
LDAi imm       00 10    16 bit value     imm --> A              3
LDAm addr      00 20    16 bit address   (addr) --> A           3
LDApc          00 40    null operand     PC --> A               3
STAm addr      00 80    16 bit address   A --> (addr)           3
STApc          01 00    null operand     A --> PC               3

The four instructions can be summarized as Load, Store, Add, and Branch if Overflow Set. Each memory operation is assumed to take one clock, and ALU operations take one clock as well. ALU (Arithmetic and Logic Unit) is something of a misnomer here, since this chip can only add. For example, "ADDm addr" takes four clocks: two to fetch the instruction and operand, one to read the memory, and one more to do the addition.

I estimate that about 1400 transistors would be needed to build a 16 bit implementation, which gives a 64k-word (2^16 words, or 128 KB) space for programs and data. With so few transistors, a very conservative performance estimate would be on the order of 50 MIPS if the memory is onboard the chip.
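To make the encoding concrete, here is a small illustrative Java fragment (the sample program and names are mine, not from the article) showing how two-word instructions sit in the shared program/data space, and how the one-hot opcode property can be checked:

public class F4Encoding {
    public static void main(String[] args) {
        // LDAi $1234 followed by STAm $0200, each laid out as opcode word + operand word
        int[] prog = { 0x0010, 0x1234,
                       0x0080, 0x0200 };
        for (int i = 0; i < prog.length; i += 2) {
            // A legal F-4 opcode word has exactly one of its 16 bits set,
            // so the decode stage reduces to a single bit test.
            boolean oneHot = Integer.bitCount(prog[i]) == 1;
            System.out.printf("%04x one-hot: %b%n", prog[i], oneHot);
        }
    }
}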

4. Code examples

Four instructions isn't much. Here are some examples of how instructions common to other processors might be implemented on the F-4.

A JMP (jump) instruction can be done by storing the destination to the program counter:

LDAi (addr)
STApc

High level languages require a JSR or CALL (jump to subroutine) function at the machine level. A simple approach is to store the return address in a dedicated location inside the function space. Note that this example is for a non-reentrant function; in general this would be done with a stack operation.

LDApc
ADDi 6
STAm return
LDAm target
STApc (executes a JMP)
... (execution resumes here)
...
(target)
LDAm return
STApc (returns)

SHL (arithmetic shift left) is a quick addition of a number to itself. The overflow flag is set if the bit shifted out is 1, clear if it's 0. This is the basic operation to build more complex functions with.

LDAm (addr)
ADDm (addr)
STAm (addr)

Here's how boolean NOT would be performed. This is a bit lengthy, so I'll use a 4-bit example:

4 bit NOT

(op) is the input
(res) is the output

LDAi $0
STAm (res)
LDAm (op)
ADDm (op)
STAm (op) (save the shifted value so the next stage tests the next bit)
BVS skip
LDAm (res)
ADDi $8
STAm (res)
skip LDAm (op)
ADDm (op)
STAm (op)
BVS skip1
LDAm (res)
ADDi $4
STAm (res)
skip1 LDAm (op)
ADDm (op)
STAm (op)
BVS skip2
LDAm (res)
ADDi $2
STAm (res)
skip2 LDAm (op)
ADDm (op)
BVS done
LDAm (res)
ADDi $1
STAm (res)
done ...

That took 29 instructions in 4 bit code (including the stores that carry the shifted value from stage to stage), and it would be 113 instructions for 16 bit code. It works by shifting each bit out sequentially with the pseudo-SHL and setting the appropriate bit in the output word accordingly. AND, OR, XOR, XNOR, and SUB/SBC work similarly, but in a more complex way, since two operands are involved, with the concomitant branching layers.

It's a misuse of the MISC concept, however, to try to directly duplicate each instruction of a more sophisticated processor. The purpose of these code examples is to demonstrate that the F-4 processor is Turing complete, as it is essentially a canonical register machine. That is to say, any calculation or algorithm can be implemented on it, given unlimited memory.

5. Simulator

To model the F-4 in action, I've developed a simple VB program that reads the memory image to run and then simulates the execution.

The memory is capped at 64 kilowords in 16 bit operating mode (word addresses 0 through $FFFF hex). No I/O is modeled, and only one MISC is running in a single data space. This is for developing and testing code fragments. Here's a screenshot of the simulator (reduced for the web).
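For readers who want the execution model spelled out, here is a rough sketch of the simulator's core fetch-decode-execute loop in Java rather than the original VB6. The class and method names are mine; the opcode values come from the table in section 3; the overflow flag is modeled as the carry out of bit 15, matching the SHL example above; and BVS is treated as a direct jump to its operand word, the way the code examples use it.

final class F4Sim {
    static final int ADDi = 0x0001, ADDm = 0x0002, ADDpc = 0x0004, BVS = 0x0008,
                     LDAi = 0x0010, LDAm = 0x0020, LDApc = 0x0040,
                     STAm = 0x0080, STApc = 0x0100;

    final int[] mem = new int[65536]; // shared program/data space of 16 bit words
    int pc, a;                        // program counter and accumulator
    boolean v;                        // overflow flag

    void step() {
        int op  = mem[pc] & 0xFFFF;
        int arg = mem[(pc + 1) & 0xFFFF] & 0xFFFF; // operand word for two-word forms
        switch (op) {
            case ADDi:  add(arg);              pc += 2; break;
            case ADDm:  add(mem[arg]);         pc += 2; break;
            case ADDpc: add(pc);               pc += 1; break;
            case BVS:   pc = v ? arg : pc + 2;          break;
            case LDAi:  a = arg;               pc += 2; break;
            case LDAm:  a = mem[arg] & 0xFFFF; pc += 2; break;
            case LDApc: a = pc;                pc += 1; break;
            case STAm:  mem[arg] = a;          pc += 2; break;
            case STApc: pc = a;                         break;
            default: throw new IllegalStateException("illegal opcode " + op);
        }
        pc &= 0xFFFF; // addresses wrap within the 64k word space
    }

    private void add(int value) {
        int sum = a + (value & 0xFFFF);
        v = (sum & 0x10000) != 0; // the flag is the bit carried out of bit 15
        a = sum & 0xFFFF;
    }
}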

Download the simulator here. Source code and some sample code fragments are included.

6. Applications

It should be evident from the above that a single MISC processor is slower than its RISC or CISC equivalents in all but the most trivial cases. However, the extreme simplicity allows ultra-parallel implementations, similar to the approach taken by Cray, whose machines use hundreds of parallel RISC processors.

Consider that a Pentium of recent vintage has on the order of 50 million transistors. If each F-4 requires 5,000 transistors (a conservative estimate to allow for the "glue logic" and caching necessary in such a highly parallel structure), then in principle 10,000 of them could fit on one die and execute simultaneously.

Such an array would be programmed in a manner not unlike user-configurable microcode processors like the Intel IXP-1200, which has 6 microengines with 2k of microstore each.

This type of configuration allows custom SIMD-type instructions to be invented and discarded on an ad-hoc basis as needed by a program. Here are some practical applications for the real world:

* Large matrix solutions and spatial transformations
* Neural nets (each F-4 owns some localized neural space)
* Data compression
* Memory swapping
* Garbage collection in LISP-type languages
* Text and virus pattern search
* 3D graphics and image rendering
* Audio data processing
* Interrupt and I/O service handling

Although it could easily be built on an FPGA chip as a demo, I plan to use the F-4 concept for experiments in genetic programming instead. Having a small number of instructions is advantageous there, and in a sense mimics the basic structure of the DNA molecule, which is built from four basic nucleotides.











comedy

http://www.norcalblogs.com/post_scripts/2009/03/womens_right_to_vote_the_1.html

arnold I voted for him

http://fora.tv/2009/03/12/Gov_Arnold_Schwarzenegger_on_Californias_Budget_Crisis#Schwarzenegger_California_Capital_Feeds_on_Dysfunction

chopstix

http://www.fourhourworkweek.com/blog/2009/03/30/how-to-use-chopsticks/

successful lisp

http://cl-cookbook.sourceforge.net/
http://psg.com/~dlamkins/sl/contents.html

learn the ways of the bash array force

http://mywiki.wooledge.org/BashFAQ/095

I'm getting "Argument list too long". How can I process a large list in chunks?

First, let's review some background material. When a process wants to run another process, it fork()s a child, and the child calls one of the exec* family of system calls (e.g. execve()), giving the name or path of the new process's program file; the name of the new process; the list of arguments for the new process; and, in some cases, a set of environment variables. Thus:

/* C */
execlp("ls", "ls", "-l", "dir1", "dir2", (char *) NULL);

There is (generally) no limit to the number of arguments that can be passed this way, but on most systems, there is a limit to the total size of the list. For more details, see http://www.in-ulm.de/~mascheck/various/argmax/ .

If you try to pass too many filenames (for instance) in a single program invocation, you'll get something like:

$ grep foo /usr/include/sys/*.h
bash: /usr/bin/grep: Arg list too long

There are various tricks you could use to work around this in an ad hoc manner (change directory to /usr/include/sys first, and use grep foo *.h to shorten the length of each filename...), but what if you need something absolutely robust?

Some people like to use xargs here, but it has some serious issues. It treats whitespace and quote characters in its input as word delimiters, making it incapable of handling filenames properly. (See UsingFind for a discussion of this.)
The most robust alternative is to use a Bash array and a loop to process the array in chunks:

# Bash
files=(/usr/include/*.h /usr/include/sys/*.h)
for ((i=0; i<${#files[@]}; i+=100)); do
    grep foo "${files[@]:i:100}" /dev/null  # /dev/null makes grep print filenames even when given a single file
done

Here, we've chosen to process 100 elements at a time; this is arbitrary, of course, and you could set it higher or lower depending on the anticipated size of each element vs. the target system's getconf ARG_MAX value. If you want to get fancy, you could do arithmetic using ARG_MAX and the size of the largest element, but you still have to introduce "fudge factors" for the size of the environment, etc. It's easier just to choose a conservative value and hope for the best.

plan9 awesome comedy

uriel: oiled_muscle: zfs and dtrace are idiotic braindamage for a system that is so bloated in complexity beyond all imagination that long ago reached the limits of human comprehension

(01:38:50) uriel: according to plan9 people opensolaris people are morons that have their heads so up their asses that they look like a Klein bottle
(01:39:39) uriel: as for desktop, I got one made of wood, thank you very much

Monday, March 30, 2009

Bill Clinton incredible damage to economy

COINCIDENCE! Robert Rubin, as Treasury Secretary under Clinton, blocked regulation of derivatives in 1997, then negotiated the repeal of the Glass-Steagall Act in 1999, making Citigroup one of the largest financial institutions in the world. Three months later he joined Citigroup's management. (nytimes.com)

Billy also let the government direct mortgages to the poor who could not afford them, resulting in the mortgage debacle.

"The O'Reilly Factor" will mark his 100th consecutive month as the No. 1-rated cable news show.

You can't deny Bill O'Reilly's success. On Tuesday, the fiery host of Fox News Channel's "The O'Reilly Factor" will mark his 100th consecutive month as the No. 1-rated cable news show.

http://tv.yahoo.com/the-o-39-reilly-factor/show/32608/news/urn:newsml:tv.reuters.com:20090330:us_oreilly__ER:48523

common lisp intro book

http://homepage.mac.com/svc/

Saturday, March 28, 2009

opensolaris tips

http://www.c0t0d0s0.org/archives/4073-Less-known-Solaris-features-RBAC-and-Privileges-Part-1-Introduction.html

http://www.c0t0d0s0.org/archives/4844-Less-known-Solaris-features-pfexec.html

If you want to know what the different profiles can do, look at /etc/security/prof_attr

http://ultravioletos.blogspot.com/2008/05/opensolaris-and-blastwave.html

prstat -s size

look at those tits

http://www.youtube.com/watch?v=M-XLjwz4i-Y&NR=1

wonder woman aint shit

http://www.youtube.com/watch?v=vnm5wUBQTs8&feature=related

shyla stylez

http://www.youtube.com/watch?v=ujos5WOzQQI

Friday, March 27, 2009

dooku

http://www.youtube.com/watch?v=lIWm1GSHJ2o

violence makes anything entertaining

http://www.youtube.com/watch?v=Jqw3pjCn8ik&feature=PlayList&p=729F16EE4336AAB3&playnext=1&playnext_from=PL&index=5

amazon ninjas vs guy 1978 shit

http://www.youtube.com/watch?v=v--faJxevjA&NR=1

opensolaris vid

http://webcast-west.sun.com/interactive/09B12442/index.html

holy shit

http://www.youtube.com/watch?v=iXvK0EbA80k&NR=1

woa 80s scifi chix

http://www.youtube.com/watch?v=ZCvk547NqFU&feature=related

holy shit light saber not star wars

http://www.youtube.com/watch?v=sC-dwjrinK0&NR=1

coreserver lisp power

http://labs.core.gen.tr/#databaseprogramming

yaow

http://www.youtube.com/watch?v=pzfuNSpP0RA&feature=related

PrevalenceSkepticalFAQ

http://www.prevayler.org/old_wiki/PrevalenceSkepticalFAQ.html

galaxina woa

http://www.youtube.com/watch?v=KbfkdfUip1E&feature=related

holy shit buck rogers babes

http://www.youtube.com/watch?v=WRGx1Evmi4U&feature=related

dtrace 1 liners opensolaris

http://www.solarisinternals.com/wiki/index.php/DTrace_Topics_One_Liners#I.2FO

Peter Thiel

http://en.wikipedia.org/wiki/Peter_Thiel

Forbes biggest private corps

http://www.forbes.com/business/lists/2008/21/privates08_Ma-Labs_4655.html

object prevalence

http://common-lisp.net/project/cl-prevalence/

samsung revenue $174.2 billion (2007)

http://en.wikipedia.org/wiki/Samsung_Group

nomarriage.com

http://www.metroactive.com/papers/metro/10.07.99/cover/divorce-9940.html

Marriage-Go-Round: Palo Alto family law attorney Bonnie Sorensen says complications in the Valley's costly boom-time divorces are at an all-time high.

For Better Or for Worth

How splitting couples in Silicon Valley are carving out new territory in divorce court

By Will Harper

ABRAHAM MA AND Judy Liu first "met" over the phone. Through months of business-related chitchat, the pair grew to appreciate each other's voices and personalities before they had ever laid eyes on each other. What began as a friendly business acquaintance discussing Ma Labs' CPUs and components and how they would benefit Judy's Texas company gradually evolved into longer conversations. Within a two-year period, their business relationship blossomed into a friendship. They spent an hour on the phone practically every day discussing the details of their lives, sharing witticisms and inside jokes, talking shop.

They saw each other infrequently. They did manage to get some face time at a couple of computer conventions, but for the most part they conducted their relationship by phone.

Then in late 1990, with Judy Liu's marriage falling apart, Ma suggested that she move out to San Jose. She did. It soon became clear that he wanted to be more than friends.

Four days after Christmas, the pair took a bus trip to Reno. On the way, Abraham asked Judy to be his wife. She accepted his proposal. The two became not only a married couple but also business partners in Ma's struggling 7-year-old company, a fledgling components maker with about 20 employees.

At the time Ma Laboratories' future looked promising, but uncertain. The company had an estimated value of $273,000.

But the couple, energized by their marriage, went to work. During their four years together, Ma and Liu worked horrific, Jolt-guzzling Silicon Valley hours--9am to 8pm--to make the company a success.

Ma provided the technical expertise. Liu, her attorney says, was the people-person with a high school education who dealt with the vendors, negotiated increasing lines of credit with banks and stopped fistfights at work.

The couple's hard work ultimately paid off. In 1994 the Business Journal recognized Ma Labs as Silicon Valley's largest privately held company. That year the company had $560 million in sales--100 times more than five years before. Ma and his wife earned combined salaries approaching $2 million and they drove around in a new Lexus 400.

But behind their platinum-card exterior, there were troubles in the marriage. The couple, both in their 40s, had become frustrated with their unsuccessful attempts to conceive a child, Judy would later say in court. They repeatedly tried in vitro fertilization without any luck. Ma badly wanted children, an heir to the couple's growing fortune.

Then, in early 1995, Judy made a discovery that brought the marriage crumbling down.

During a trip to Taiwan to attend her father's funeral, Liu tried repeatedly to call her husband one night, but he wasn't home. She became suspicious. When she returned, she wondered what happened to the monthly $48,000 interest payment the couple received from the company. Her attorney says that she eventually learned that her husband had given the money to his mistress--a woman who used to work at Ma Labs--so she could buy a home in China.

In a court affidavit, Liu claims that once she uncovered the affair, Ma actually asked her if it would be all right to let his girlfriend have his baby and come live with them in their Milpitas home. Liu declined the offer and started a lengthy and high-stakes court battle to divide the vast Ma estate, an estate symbolic of the success stories of the valley and the complexities that go with divvying them up.

It has taken, so far, four years and more than $2 million in attorney and expert fees--mostly shelled out by Ma, who reports a gross monthly income of $383,752. In the crossfire, things in this case have turned nasty.

In response to her revelations about his female-roommate request, Ma has accused his ex-wife of misappropriating hundreds of thousands of dollars from the company when they were married, and is suing her in a separate case for fraud. She, in turn, has accused her ex-husband of fudging his numbers for the purposes of tax evasion and bribing a witness.

Ultimately, what the court fight comes down to is money. Millions of dollars are at stake.

Liu wants her share of Ma Labs, a company, her lawyer argues, that skyrocketed in value during the marriage because of Liu's contributions. Liu's accountant pegs the company's value at the date of separation in February 1995 at $60 million. Ma's accountant counters that it was more like $18 million. A neutral court-appointed bean-counter came down in between the two estimates at $30 million. (Ma Labs' website boasts that the company's current annual revenues exceed $740 million.)

"It's a classic Silicon Valley case," says Shawn Leuthold, Liu's attorney. "This company rose from a small kernel of about $270,000 to a huge value of $60 million in five years. Where else but Silicon Valley can you see a company increase in value so much in such a short time? That's not the kind of growth you're going to see in another company or industry, but in Silicon Valley it's the norm."

DIVORCES LIKE the Ma case--complicated, big-stake estate battles--are becoming more common in this decade than ever before, traveling into new and often uncharted territory in divorce court.

"We started seeing the big blowup [in estate values] three to five years ago," recalls Palo Alto family law attorney Bonnie Sorensen, who regularly represents people in the high-tech industry. "At this point the $1 million, $2 million case isn't one you talk about very much ... it's the person who comes in with the $16 million, or the $40 million, or the two people who each have got the multimillion-dollar estates that we talk about now."

It's divorce Silicon Valley-style.

Where couples fight over the custody of stock options, start-ups and intellectual property rights as they would over their kids. Where the word "move-aways"--the cash-poor spouse hightails it out of the land of the $3 latte for Sanka territory--is used casually by family law attorneys. Where a "forensic accountant," a financial expert who testifies in court, can bill for $500,000 worth of hours (as in the Ma case) after navigating through a maze of cash, bank accounts and business assets. Where the numbers game gets so wacky that math-challenged judges must hire a neutral financial expert to advise them (which Judge James Stewart did in the Ma case).

"Everything we see now is so much more complicated," explains Stewart, who retired last month after serving eight years as a family court judge.

If there is any good news in this mess, it is that fewer couples are filing for divorce in Santa Clara County these days. According to County Clerk Steve Love, the number of divorce filings has gradually dropped over the last two decades. In 1980, 10,905 couples filed for divorce here. By 1998 that number had dropped to 7,509 even though the area's population had increased 31.7 percent since 1980.

At the same time, according to family court Judge Jerald Infantino, about 90 percent of divorces are resolved before they get to trial, thanks largely to the courts' aggressive efforts to get couples into mediation.

But legal sources say the remaining 10 percent of divorce cases that do make their way to trial are more contentious and complicated than ever before. The old adage that money troubles are the leading cause of divorce has a new twist: too much of it can cause problems, too. Especially when it comes after years of scrimping and saving, and just when everyone least expected it, as in the case of David and Iris Cheriton.

IN THE 14TH YEAR of their marriage, in 1994, David and Iris Cheriton and their four children lived what Judge Mary Ann Grilli called "a middle-class lifestyle." He was a tenured Stanford computer science professor who earned around six figures from teaching and consulting. She taught piano lessons part time in their Palo Alto home when she wasn't taking care of the kids. They drove a 1985 Vanagon, and their colonial-style Cowper Street home was, as Judge Grilli put it, "in serious need of repair."

One fall morning David Cheriton told his wife that he wanted to end the marriage (according to a friend of Iris', he broke the news only one hour after she had found out that her mother had died). He filed for divorce shortly thereafter.

Less than a year later, David Cheriton took a leave of absence from Stanford to work at a startup, Granite Systems, which he co-founded with Andreas Bechtolsheim, one of the founders of Sun Microsystems. The pair's nascent computer networking company soon caught the attention of Cisco Systems, which bought Granite for $220 million in April 1996.

As part of the buyout, Cisco granted the 45-year-old professor an option to buy stock with a market value of around $45 million.

Another overnight Silicon Valley decamillionaire.

After her husband's incredible windfall, Iris decided that she and the kids should get a piece of the action. During their marriage, she told the court, her husband was a notorious cheapskate. Even though he was the obvious breadwinner with a university job, Iris said he insisted that his wife pay for the family's groceries and split the costs in repairing their home after the 1989 quake, using the meager income from her piano lessons. While he drove the newer Vanagon, she was forced to drive a beat-up 1970 van, which a family friend described as "the most dilapidated, unsafe vehicle I've ever seen."

But there was a significant obstacle to Iris and the kids' sharing the newfound wealth: by June 1998, David Cheriton had chosen to cash in or "exercise" only $9.7 million worth of his stock options. The Stanford prof was still sitting on an estimated $40 million goldmine, which he could exercise when he felt good and ready. Iris' attorneys argued that the options should be made available for spousal and child support right away because they were vested and exercisable. And Cheriton was sitting on them, they suggested, to keep them away from his family.

In a controversial June 1998 decision, Judge Grilli disagreed with Iris' attorneys. While she ruled that options could indeed be used in calculating support, she said that those calculations couldn't be made until David Cheriton actually exercised his options.

Her decision, currently under appeal, is being watched by local attorneys eager to see which way the ruling goes on such an increasingly typical component of the Silicon Valley divorce settlement.

With so many cash-poor startups using stock options to sweeten compensation packages, these options are expected to be the battleground of the future in Santa Clara County divorces.

"We have some pretty sophisticated financial issues here that probably don't get developed in the same way in other areas," observes family law attorney Sherry Cassedy of the law firm Lakin Spears in Palo Alto. "I don't have many divorce cases anymore where there's not stock options involved. And often both the husband and wife have options. ... Stock options are becoming an everyday part of our practice. Elsewhere around the country, they're probably novel."

Because stock options are a relatively new animal in family law--the first California case hit the appellate courts only 15 years ago--there isn't much guidance from the higher courts yet. The trial courts have lots of discretion, which isn't always a good thing, because option disputes can get so complex. When the options are granted and when they vest--was it during the marriage or after?--and why they were granted in the first place--as a reward for past performance or an incentive to stay?--can all affect who gets what.

A new fight is emerging over what mathematical formula should be used to calculate how to divide options that vest gradually over time--which, in big-stake cases, can mean a difference of millions of dollars.

And because the case law around stock options is still evolving, Silicon Valley trial court judges find themselves often operating at ground zero on questions like the one posed in the Cheriton case. "Until we can get some law on this," says now-retired family court Judge James Stewart, "we're all going in the dark."

STOCK OPTIONS ARE hardly the only factor in what makes divorce in Silicon Valley unique. Attorney Bonnie Sorensen tells a story about a client who, like so many other successful professionals in Silicon Valley, decided to take an early retirement with his millions at age 50. The wife, not content with splitting the $13 million estate, wanted more money in spousal and child support. Her attorneys argued that her estranged husband retired prematurely and could still be working. Therefore, she is asking the court to calculate spousal and child support as if her ex-hubby were still going to the office and making $500,000 a year.

Then there are the couples who co-found a start-up, slave to make it a success and then decide to split up. Sometimes, family attorneys say, the couple tries to continue their business partnership for a while, only to find out working side-by-side with one's ex isn't such a great idea. At that point, the two must battle over who gets control of the company.

"We had a case," recalls Palo Alto attorney Cassedy, "where a couple who founded a startup company that went public later decided to get divorced. They really had to decide who would continue with the company and who would leave. One of them was the technical expert; the other was the marketing person. What they had to look at was who was more important to the company's future." Ultimately, the technical expert--who was the husband--stayed with the company.

Cassedy has another case now involving a new venture capitalist who got into the biz during the final year of his marriage. In that year, he only had time to complete the first step involved in the process: raising money. The actual investing in the start-ups and the reaping of possible millions are still several years away, Cassedy says. Nevertheless, the wife argues that she should get a share of the future millions because he formed the venture partnership during the marriage even if he didn't actually invest any cash during that time.

Cassedy predicts the case will have to go to trial.

Any discussion of divorce Silicon Valley-style can't omit mention of intellectual property. The value of an idea can mean big money in a divorce. Often the important question comes down to this: When did the spouse come up with a moneymaking idea? If the proverbial light bulb went off during the marriage, the other spouse is entitled to a share under California community-property law if the idea hits IPO paydirt, even after separation.

"I just talked to a client the other day," Cassedy says, "whose ex-husband's company just went public. 'And, by the way,' she says, 'he has this other idea.' The thing is, How do you prove that he had the idea before separation?"

ON TOP OF THE WINDFALLS of corporate success, another item in Silicon Valley that has crept out of control and raised the ante in divorce is real estate. What was once a slam-dunk decision to sell the family home and split the proceeds is no longer a simple--or affordable--matter. Even with high profits in the current market, one home's sale doesn't necessarily result in enough money for the purchase of two new homes. The result is what family court professionals call the "move-away." Matthew Sullivan, a family court specialist who generally handles custody cases involving upper-income households, says the move-away is fast becoming one of the most common and disturbing trends in Silicon Valley divorces.

The typical move-away goes like this: Couple files for divorce. During the court proceedings, the spouse who was the homemaker or made a modest living can no longer afford to live in Silicon Valley--where the median home price in the county is now $400,000, and $750,000 in an upscale city like Palo Alto.

"The mom, for instance, will decide, 'I don't want to live here, it's too expensive. I'm going back to Iowa,' " Sullivan says.

The mom in that kind of case, Sullivan says, will often get custody of the kids because the dad is working crazy Silicon Valley 80-hour workweeks with no extra time for parenting. The kids--on top of the stresses of their parents splitting up--are then subjected to the trauma of moving away from friends and schools and familiar things, which is often just as disruptive as the divorce itself.

Move-aways can also be especially devastating for the workaholic parent, Sullivan suggests, who often has been laboring under the notion that his unhealthy workload will be justified after he hits the jackpot because he can then spend all the time he wants with his kids and wife.

In the meantime, while he waits to strike it rich, the marriage falls apart. "Obviously it's a gamble," Sullivan says. "Even if you do strike it rich, it's not like a father and husband instantly emerge."

There are other variations on the move-away theme.

Another common situation Silicon Valley family lawyers and real estate brokers talk about is where husband and wife find that neither of them can afford to keep the home after the divorce.

"In this area, it usually takes two wage-earners," says Carl San Miguel, president-elect of the San Jose Real Estate Board, "to qualify to buy a house. If one of them leaves [because of a divorce], it's very difficult to qualify for the same house with just one wage-earner."

As a result, the couple must put their home on the market. Where do they go while they wait for a buyer? "In some cases," attorney Lynne Yates-Carter says, "I've got clients who have gone to live with their parents."

In other cases, the move-away actually gets turned on its head. "I'm seeing more people," says Palo Alto attorney Bonnie Sorensen, "who continue to live together post-separation until the house sells because they can't afford to live anywhere else."

BELIEVE IT OR NOT, there are some happy endings here. Or at least as happy as can be expected in a divorce.

While the presence of big bucks can make divorce a messy nightmare, there are times when money simplifies things. "You see it go both ways," says Michael Flicker, a Peninsula-based attorney with many high-tech clients. "Often the wife, who tends to be the one not making the big bucks, accepts less than she could get because she'll ask herself, 'How much do I really need?' "

And it's important to note other positive trends, family law attorneys say, such as the fact that 90 percent of divorce cases in the valley are settled before things get too nasty and end up in a trial. Sherry Cassedy also takes heart from the declining number of people filing for divorce in Santa Clara County, which has seen a 31 percent decline since 1980.

"What I'm finding are couples now that understand divorce is not an easy out," Cassedy says. "They see that divorce requires a long process of separation and all these financial dealings, especially when kids are involved." She adds that she comes across more couples nowadays who are trying to work things out rather than hit the eject button on the marriage.

But she also has noticed a disturbing new phenomenon in Silicon Valley divorces. While it's true that the majority of cases settle before they end up in a messy trial, Cassedy says that for the approximately 10 percent that do go to Defcon 1, the battle is nastier than ever with so much more money at stake. "The hotly contested litigation is on the uprise," Cassedy says. "The middle ground is getting lost."

big brother traffic cameras communism

http://online.wsj.com/article/SB123811365190053401.html?mod=yhoofront

By WILLIAM M. BULKELEY

The village of Schaumburg, Ill., installed a camera at Woodfield Mall last November to film cars that were running red lights, then used the footage to issue citations. Results were astonishing. The town issued $1 million in fines in just three months.

But drivers caught by the unforgiving enforcement -- which mainly snared those who didn't come to a full stop before turning right on red -- exploded in anger. Many vowed to stop shopping at the mall unless the camera was turned off. The village stopped monitoring right turns at the intersection in January.

Once a rarity, traffic cameras are filming away across the country. And they're not just focusing their sights on red-light runners. The latest technology includes cameras that keep tabs on highways to catch speeders in the act and infrared license-plate readers that nab ticket and tax scofflaws.

[Photo: Associated Press -- Vehicles drive past a speed surveillance and ticketing camera on a road heading into downtown in Cleveland, Ohio, earlier this month. The cameras measure the speed of passing motorists and make a photo of the car's license plate so the city can mail a speeding ticket to the offending driver.]

Drivers -- many accusing law enforcement of using spy tactics to trap unsuspecting citizens -- are fighting back with everything from pick axes to camera-blocking Santa Clauses. They're moving beyond radar detectors and CB radios to wage their own tech war against detection, using sprays that promise to blur license numbers and Web sites that plot the cameras' locations and offer tips to beat them.

Cities and states say the devices can improve safety. They also have the added bonus of bringing in revenue in tight times. But critics point to research showing cameras can actually lead to more rear-end accidents because drivers often slam their brakes when they see signs warning them of cameras in the area. Others are angry that the cameras are operated by for-profit companies that typically make around $5,000 per camera each month.

"We're putting law enforcement in the hands of third parties," says Ryan Denke, a Peoria, Ariz., electrical engineer who has started a Web site, Photoradarscam.com, to protest the state's speed cameras. Mr. Denke says he hasn't received a ticket via the cameras.

Protests over the cameras aren't new, but they appear to be rising in tandem with the effort to install more. Suppliers estimate that there are now slightly over 3,000 red-light and speed cameras in operation in the U.S., up from about 2,500 a year ago. The Insurance Institute for Highway Safety says that at the end of last year, 345 U.S. jurisdictions were using red-light cameras, up from 243 in 2007 and 155 in 2006.

One traffic-cam seller, Arizona-based American Traffic Solutions Inc., recently reported it had installed its 1,000th camera, with 500 more under contract in 140 cities and towns. Rival Redflex Holdings Ltd. says it had 1,494 cameras in operation in 21 states at the end of 2008, and expects to top 1,700 by the end of this year.

Municipalities are establishing ever-more-clever snares. Last month, in a push to collect overdue taxes, the City Council in New Britain, Conn., approved the purchase of a $17,000 infrared-camera called "Plate Hunter." Mounted on a police car, the device automatically reads the license plates of every passing car and alerts the officer if the owner has failed to pay traffic tickets or is delinquent on car taxes. Police can then pull the cars over and impound them.

New Britain was inspired by nearby New Haven, where four of the cameras brought in $2.8 million in just three months last year. New Haven has also put license-plate readers on tow trucks. They now roam the streets searching for cars owned by people who haven't paid their parking tickets or car-property taxes. Last year 91% of the city's vehicle taxes were collected, up from "the upper 70s" before it acquired the technology, says city tax collector C.J. Cuticello.
[Video: "'Smart Intersections' Coming to a Street Near You" -- WSJ's Stacey Delo explores efforts to develop "smart intersections" which advocates hope can create a better informed driver and safer roads.]

Not that it's been smooth sailing. Mr. Cuticello recalls the time he tried to help tow the car of a woman who owed $536. She knocked him over, jumped in the car and drove away. She was later arrested for a hit-and-run.

City leaders have generally maintained that while revenue is a welcome byproduct of traffic citations, the laws are in place to improve public safety or reduce accidents.

But a study in last month's Journal of Law and Economics concluded that, as many motorists have long suspected, "governments use traffic tickets as a means of generating revenue." The authors, Thomas Garrett of the St. Louis Fed and Gary Wagner of the University of Arkansas at Little Rock, studied 14 years of traffic-ticket data from 96 counties in North Carolina. They found that when local-government revenue declines, police issue more tickets in the following year. Officials at the North Carolina Association of Chiefs of Police didn't respond to requests for comment.

George Dunham, a village trustee in Schaumburg, says installing the red-light camera at the mall "wasn't about the revenue -- no one will believe that, but it wasn't." On the other hand, he says, with fuel taxes and sales taxes falling, its retreat on the camera has had a "painful" impact on Schaumburg's $170 million budget.

[Photo: Associated Press -- An intersection in Jackson, Miss., bears a sign warning drivers that the intersection is being photographed for possible traffic violations.]

Cameras to catch speeders on highways, which are common in Europe, are just starting to spread in the U.S. Last June, Arizona added a provision for speed cams on highways to its budget bill, with an anticipated $90 million in fines expected to help balance the budget.

State police started placing the cameras on highways around Phoenix in November. In December, a trooper arrested a man in Glendale while he was attacking a camera with a pick ax. In another incident, a troupe of men dressed as Santa Claus toured around the city of Tempe in December and placed gaily wrapped boxes over several traffic cameras, blocking their views. Their exploits have been viewed more than 222,000 times on YouTube.

Republican state representative Sam Crump has introduced a bill in the legislature to remove the cameras, which he says were approved "in the dead of night...as a budget gimmick."

In the meantime, the cameras are still being rolled out, and have already issued more than 200,000 violation notices since September. They are set to take a picture of cars going more than 11 miles over the speed limit, and they also photograph the driver.

Some entrepreneurs are trying to help camera opponents fight back. Phantom Plate Inc., a Harrisburg, Pa., company, sells Photoblocker spray at $29.99 a can and Photoshield, a plastic skin for a license plate. Both promise to reflect a traffic-camera flash, making the license plate unreadable. California passed a law banning use of the spray and the plate covers, which became effective at the beginning of this year.

A free iPhone application available on Trapster.com lets drivers use their cellphones to mark a traffic cam or speed trap on a Google map. The information on new locales is sent to Trapster's central computer, and then added to the map.

Other anti-cam Web sites counsel people to examine the pictures that come in the mail with citations. If the facial image is too blurry, they say, drivers can often argue successfully in court that no positive identification has been made of them.

Studies are mixed on whether traffic cameras improve safety. Some research indicates they may increase rear-end collisions as drivers slam on their brakes when they see posted camera notices. A 2005 Federal Highway Administration study of six cities' red-light cameras concluded there was a "modest" economic benefit because a reduction in side crashes due to less red-light running offset the higher costs of more rear-end crashes.

A study of crash causes released by the National Highway Traffic Safety Administration last July found about 5% of crashes were due to traveling too fast and 2% were from running red lights. Driving off the side of the road, falling asleep at the wheel and crossing the center lines were the biggest causes identified.

Write to William M. Bulkeley at bill.bulkeley@wsj.com

forth computers prevalence

http://ccreweb.org/software/kforth/kforth2.html
http://thinking-forth.sourceforge.net/
http://home.iae.nl/users/mhx/sf.html
http://www.webring.com/hub?ring=forth
http://ccreweb.org/documents/programming/html/Forth-Presentation/text7.html
http://ccreweb.org/documents/dpans94/dpans.htm
http://www.taygeta.com/fsl/sciforth.html
http://www-personal.umich.edu/~williams/archive/forth/forth.html
http://ccreweb.org/software/kforth/kforth4.html
http://ccreweb.org/software/kforth/kforth1.html
http://wiki.squeak.org/squeak/3043
http://labs.core.gen.tr/#index

Klaus Wuestefeld

KlausWuestefeld

This is a rough rendering of a page from the old Prevayler wiki. Please see the new wiki for current documentation.
Father of Julia and Thomas. @:)

Programmer since 1983.
Professional Programmer since 1992.
Object Orientation enthusiast since 1994.
Java Programmer since 1998.
Extreme Programming enthusiast since 2000.

Author of Prevayler and the PrevalenceSkepticalFAQ.

eMail: klauswuestefeld at users dot sourceforge dot net

I do not claim the creation of the transaction log + snapshot concept. Databases have forever used transaction logs for persistence purposes. Several multiplayer games use the concept of publishing the users' "transactions" and having a replica for every user calculating the entire game state.

What I did was to recognize its full potential, give it a name (SystemPrevalence), provide a free implementation (Prevayler) and start a community around it. @:)

KlausWuestefeld



Klaus,

I have put some (constructive) criticism of the Prevayler.org site over on my blog.
http://www.cardboard.nu/archives/000035.html

-- Alan

Intro to forth for scientists and engineers

http://ccreweb.org/documents/programming/html/Forth-Presentation/text0.html

PrevaylerIsNotADatabase

PrevaylerIsNotADatabase

This is a rough rendering of a page from the old Prevayler wiki. Please see the new wiki for current documentation.
Relational database management systems restrict you to using rows and columns, and ALL database management systems, including OODBMSs:
- worry about RAM limitations;
- worry about paging data blocks from RAM to disk and back;
- restrict you to using query algorithms, data structures and query languages (SQL, OQL) that must work well on disk clusters.
Prevayler does not.

It is, therefore, MUCH simpler (which makes it much more robust) and orders of magnitude Faster (no JDBC overhead).

You are finally free to write a decent object server, the way OO was intended since the beginning. See: NoMorePorridge.

--KlausWuestefeld.

See: PrevalentHypothesis.

Why isn't Prevayler a database? Granted it's not an RDBMS, but it does look like a DBMS. --DafyddRees

Yes, Prevayler is a DBMS, as it manages command-logs and snapshots, which are nothing but object stores, and objects are made mainly of data. But, in practice, it's easier to explain how it works by reverse logic: first negating that Prevayler is a database management system, thus removing all assumptions that this statement makes, and then explaining what Prevayler is. The conclusion is that Prevayler is a DBMS, but you don't have to keep saying "no, it doesn't have SQL", "no, no JDBC", and things like that @:) -- CarlosVillela

Prevayler is not a database. Read the beginning of the page. --KlausWuestefeld

At what point doesn't Prevayler worry about RAM limitations? Surely worrying about RAM limitations is one of the primary concerns of Prevayler?

No. Prevayler assumes the PrevalentHypothesis. Databases do not. --KlausWuestefeld

A "database" does not have to be on or assume disk. RAM-centric RDBMS are in the works.

Prevayler provides persistence. RAM-centric "databases" don't. I wouldn't even call them "databases". I would call them RAM-centric SQL engines. --KlausWuestefeld

Also, "database" does not have to be a relational database. The "navigational" databases of the 1960's are an example, as is OODBMS's (which some say are repackaged navigational DB's.) Thus, if we ignore the RAM issue, don't assume just relational ("must fit a table shape"), then exactly how is a prevalance layer different from a database? It is a sub-set of databases? Perhaps it can be defined as a "database based in RAM that does not have to be relational".

Well, if you "ignore" all the differences between Prevayler and a database then I suppose you can make yourself believe Prevayler is a database. --KlausWuestefeld

"No. Prevayler assumes the Prevalent Hypothesis. Databases do not."

You have got to be kidding. Further comment moved to http://fishbowl.pastiche.org/archives/001295.html
Clearly, Klaus has a different definition of database than the rest of the world. Which, btw, he is welcome to do. However, it would make him seem more intelligent and less like a man selling snake oil if he would define it and explicitly spell out all the differences between Prevayler and a database.

Also, the idea that any software is not RAM dependent on any current hardware architecture displays a naivety about how a computer works. Software is highly RAM dependent, period. You can make some fantastic hypothesis about there being an infinite amount of RAM (hell, you can even say that your CPU is infinitely fast), but when the software runs, it is dependent on there being enough RAM to keep from swapping, on the RAM being error free, and on the speed of the memory bus. It's also highly dependent on the CPU cache. Of course, it is dependent on a whole lot of other hardware concepts, but the previously mentioned items are particularly important for this type of software solution.

Tuesday, March 24, 2009

OBJECT PREVALENCE Posted 23 Dec 2001 at 03:46 UTC by KlausWuestefeld

http://www.advogato.org/article/398.html


Transparent Persistence, Fault-Tolerance and Load-Balancing for Java Systems.


Orders of magnitude FASTER and SIMPLER than a traditional DBMS. No pre- or post-processing required, no weird proprietary VM required, no base-class inheritance or clumsy interface definition required: just PLAIN JAVA CODE.


How is this possible?



Question: RAM is getting cheaper every day. Researchers are announcing major breakthroughs in memory technology. Even today, servers with multi-gigabyte RAM are commonplace. For many systems, it's already feasible to keep all business objects in RAM. Why can't I simply do that and forget all the database hassle?

Answer: You can, actually.




Are you crazy? What if there's a system crash?

To avoid losing data, every night your system server saves a snapshot of all business objects to a file using plain object serialization.
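A minimal sketch of that nightly snapshot in Java, using plain object serialization as described; the class and method names here are illustrative, not Prevayler's actual API:

import java.io.*;

public class Snapshotter {
    // Writes the entire business-object graph reachable from 'root' to a file.
    // Everything in the graph must implement java.io.Serializable.
    public static void writeSnapshot(Serializable root, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(root);
        }
    }

    // Reads a snapshot back; used during crash recovery (see below).
    public static Object readSnapshot(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return in.readObject();
        }
    }
}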




What about the changes that occurred since the last snapshot was taken? Won't the system lose those in a crash?

No.




How come?

All commands received from the system's clients are converted into serializable objects by the server. Before being applied to the business objects, each command is serialized and written to a log file. During crash recovery, first, the system retrieves its last saved state from the snapshot file. Then, it reads the commands from the log files created since the snapshot was taken. These commands are simply applied to the business objects exactly as if they had just come from the system's clients. The system is then back in the state it was just before the crash and is ready to run.
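The same mechanics sketched in Java, again with illustrative names rather than Prevayler's real interface: each command is serialized to the log before it touches the business objects, and recovery is a snapshot read followed by a replay.

import java.io.*;

// Every client command is a serializable object applied deterministically.
interface Command extends Serializable {
    void executeOn(Object system);
}

class CommandLogger {
    private final ObjectOutputStream log;

    CommandLogger(File logFile) throws IOException {
        log = new ObjectOutputStream(new FileOutputStream(logFile));
    }

    // Write-ahead: the command is durable before it mutates the system.
    // (A production layer would also sync() the underlying FileDescriptor.)
    void execute(Command c, Object system) throws IOException {
        log.writeObject(c);
        log.flush();
        c.executeOn(system);
    }

    // Crash recovery: restore the last snapshot, then reapply the logged
    // commands exactly as if they had just arrived from the clients.
    static Object recover(File snapshotFile, File logFile) throws Exception {
        Object system = Snapshotter.readSnapshot(snapshotFile); // from the sketch above
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(logFile))) {
            while (true) {
                ((Command) in.readObject()).executeOn(system);
            }
        } catch (EOFException endOfLog) {
            return system;
        }
    }
}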




Does that mean my business objects have to be deterministic?

Yes. They must always produce the same state given the same commands.




Doesn't the system have to stop or enter read-only mode in order to produce a consistent snapshot?

No. That is a fundamental problem with transparent or orthogonal persistence projects like PJama (http://www.dcs.gla.ac.uk/pjava/) but it can be solved simply by having all system commands queued and routed through a single place. This enables the system to have a replica of the business logic on another virtual machine. All commands applied to the "hot" system are also read by the replica and applied in the exact same order. At backup time, the replica stops reading the commands and its snapshot is safely taken. After that, the replica continues reading the command queue and gets back in sync with the "hot" system.




Doesn't that replica give me fault-tolerance as a bonus?

Yes it does. I have mentioned one but you can have several replicas. If the "hot" system crashes, any other replica can be elected and take over. Of course, you must be able to afford a machine for every replica you want.




Does this whole scheme have a name?

Yes. It is called system prevalence. It encompasses transparent persistence, fault-tolerance and load-balancing.




If all my objects stay in RAM, will I be able to use SQL-based tools to query my objects' attributes?

No. You will be able to use object-based tools. The good news is you will no longer be breaking your objects' encapsulation.




What about transactions? Don't I need transactions?

No. The prevalence design gives you all transactional properties without the need for explicit transaction semantics in your code.




How is that?

DBMSs tend to support only a few basic operations: INSERT, UPDATE and DELETE, for example. Because of this limitation, you must use transaction semantics (begin - commit) to delimit the operations in every business transaction for the benefit of your DBMS. In the prevalent design, every transaction is represented as a serializable object which is atomically written to the queue (a simple log file) and processed by the system. An object, or object graph, is enough to encapsulate the complexity of any business transaction.
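For example, a whole multi-account transfer can travel as one serializable object, so there is nothing to begin or commit. This is an invented illustration (Bank and Transfer are not Prevayler classes), reusing the Command interface from the sketch above:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

class Bank implements Serializable {
    final Map<String, Long> balances = new HashMap<>();
    long now; // logical clock, set only by commands (see the ClockTick sketch below)
}

// One logged object == one atomic business transaction: it either reaches
// the command log (and is applied in full) or it never happened.
class Transfer implements Command {
    final String from, to;
    final long cents;

    Transfer(String from, String to, long cents) {
        this.from = from; this.to = to; this.cents = cents;
    }

    public void executeOn(Object system) {
        Bank bank = (Bank) system;
        bank.balances.merge(from, -cents, Long::sum);
        bank.balances.merge(to, cents, Long::sum);
    }
}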




What about business rules involving dates and time? Won't all those replicas get out of sync?

No. If you ask the use-case gurus, they will tell you: "The clock is an external actor to the system." This means that clock ticks are commands to the business objects and are sequentially applied to all replicas, just like all other commands.
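Sketched the same way (ClockTick is an invented class building on the Bank example above): time enters the system only as a logged command, so replicas never consult their own clocks and date-based rules stay deterministic.

// Applied to every replica in the same order as all other commands.
class ClockTick implements Command {
    final long millis;

    ClockTick(long millis) { this.millis = millis; }

    public void executeOn(Object system) {
        // Business rules read this field; they never call System.currentTimeMillis().
        ((Bank) system).now = millis;
    }
}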




Is object prevalence faster than using a database?

The objects are always in RAM, already in their native form. No disk access or data marshalling is required. No persistence hooks placed by preprocessors or postprocessors are required in your code. No "isDirty" flag. No restrictions. You can use whatever algorithms and data-structures your language can support. Things don't get much faster than that.




Besides being deterministic and serializable, what are the coding standards or restrictions my business classes have to obey?

None whatsoever. To issue commands to your business objects, though, each command must be represented as a serializable object. Typically, you will have one class for each use-case in your system.




How scalable is object prevalence?

The persistence processes run completely in parallel with the business logic. While one command is being processed by the system, the next one is already being written to the log. Multiple log files can be used to increase throughput. The periodic writing of the snapshot file by the replica does not disturb the "hot" system in the slightest. Of course, tests must be carried out to determine the actual scalability of any given implementation but, in most cases, overall system scalability is bound by the scalability of the business classes themselves.




Can't I use all those replicas to speed things up?

All replicas have to process all commands issued to the system. There is no great performance gain, therefore, in adding replicas to command-intensive systems. In query-intensive systems such as most Web applications, on the other hand, every new replica will boost the system because queries are transparently balanced between all available replicas. To enable that, though, just like your commands, each query to your business logic must also be represented as a serializable object.
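A sketch of a query-as-object, again with invented names building on the Bank example: because the query is serializable, it can be shipped to any replica and run against that replica's copy of the business objects.

import java.io.Serializable;

interface Query<T> extends Serializable {
    T executeOn(Object system);
}

class BalanceQuery implements Query<Long> {
    final String accountId;

    BalanceQuery(String accountId) { this.accountId = accountId; }

    public Long executeOn(Object system) {
        // Read-only, so it never goes through the command log and can be
        // load-balanced across replicas.
        return ((Bank) system).balances.getOrDefault(accountId, 0L);
    }
}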




Isn't representing every system query as a serializable object a real pain?

That's only necessary if you want transparent load-balancing, mind you. Besides, the queries for most distributed applications arrive in a serializable form anyway. Take Web applications for example: aren't HTTP request strings serializable already?




Does prevalence only work in Java?

No. You can use any language for which you are able to find or build a serialization mechanism. In languages where you can directly access the system's memory and if the business objects are held in a specific memory segment, you can also write that segment out to the snapshot file instead of using serialization.




Is there a Java implementation I can use?

Yes. You will find Prevayler - The Open-Source Prevalence Layer, an example application and more information at http://www.prevayler.org. It does not yet implement automatic load-balancing but it does implement transparent business object persistence and replication is in the oven.




Is Prevayler reliable?

Prevayler's robustness comes from its simplicity. It is orders of magnitude simpler than the simplest RDBMS. Although I wouldn't use Prevayler to control a nuclear plant just yet, its open-source license ensures that the whole software-developing community can scrutinize, optimize and extend Prevayler. The real questions you should bear in mind are: "How robust is my Java Virtual Machine?" and "How robust is my own code?". Remember: you will no longer be writing feeble client code. You will now have the means to actually write server code. It's the way object orientation was intended all along; but it's certainly not for wimps.




You said Prevayler is open-source software. Do you mean it's free?

That's right. It's licensed under the Lesser General Public License.




But what if I'm emotionally attached to my database?

For many applications, prevalence is a much faster, much cheaper and much simpler way of preserving your objects for future generations. Of course, there will be all sorts of excuses to hang on to "ye olde database", but at least now there is an option.


---------------------------------------------------------------------

ABOUT THE AUTHOR

KlausWuestefeld enjoys writing good software and helping other people do the same. He has been doing so for 17 years now. He can be contacted at klaus@objective.com.br.

---------------------------------------------------------------------

"PREVAYLER" and "OPEN-SOURCE PREVALENCE LAYER" are trademarks of Klaus Wuestefeld.

Copyright (C) 2001 Klaus Wuestefeld.

Unmodified, verbatim copies of this text including this copyright notice can be freely made.
Interesting but..., posted 23 Dec 2001 at 07:08 UTC by ncm »
There are still quite a few things we really need transactions for:
- When you make the first of a series of changes to objects in the database, you typically break one or more database invariants until you get the last change entered. Other processes looking at the database had better either wait, or had better see the state it had before you started. To get much concurrency, you need to snapshot the state before the first change.
- If you get halfway through a series of changes and crash, the system had better come back up without the changes you made, because you're not going to be equipped to continue where you left off.
- If you get halfway through a series of changes and discover some condition that keeps you from finishing, you had better be able to just drop the changes and pick up with the original snapshot.
- If N processes make a series of conflicting changes concurrently, (N-1) of them had better be told that their changes have failed, and that they must try again.
There's a reason that databases are written by career professionals. A simple object database can be really useful, but that doesn't make it a substitute for the real thing. That's part of why so many "object database" companies failed some ten years back.
Transactions, posted 23 Dec 2001 at 09:03 UTC by Pseudonym »

Actually, transactions are not so important in the external interface of an OODBMS. In an RDBMS, a manipulation typically involves several SQL statements (e.g. INSERT, UPDATE, DELETE), each of which can act on only one table at a time. So if a manipulation needs to touch more than one table, you need to ensure that the set of statements is atomic by issuing a transaction.

In an OODBMS, where manipulation methods can operate on more than one class, the need is reduced somewhat. Internally, you can just queue up the command logs until the method is complete, then write them out together. Then the problem becomes entirely one of synchronisation. It's not quite ACID, but it'll do for most business applications.

You (ncm) are right, however, in that this solution, while no doubt excellent for many purposes (e.g. if you're happy with the robustness and performance of MySQL, you'll probably be happy with this, too), won't scale to many critical applications. For example, it would be quite hard to handle replication in any sane manner.

Klaus, as a matter of interest, how did you manage to get Java to force flushing to disk?
Scalability ?, posted 23 Dec 2001 at 12:03 UTC by jneves »
Is it just me, or is Prevayler, as it stands, only useful on a uniprocessor machine? You process requests one at a time, which means that two different requests can't be processed at the same time on different processors. And when you have several replicas, you have to have some coordination between all replicas to ensure the order of the requests. Or am I missing something here?
Interesting but problematic, posted 23 Dec 2001 at 12:59 UTC by dannu »
Thank you (Klaus Wuestefeld) for your nice write-up. I mostly agree to the points made by the other repliers.

Let me discuss/ask some further points:

Distributed systems? If an application deployed with system prevalence contacts other services or other servers, you have a synchronization problem. How do you handle that? I guess you end up doing a 2PC-like synchronization between your prevalence servers.

A fine-grained transaction model versus all or nothing? In big business-object systems there are actually lots of small transactions. The system prevalence paradigm doesn't give you fine-grained application-side control, or does it? Note, though, that you can adapt (extended) 2PC transactions to work efficiently RAM-based while retaining persistent storage properties (by using RAM-based subtransactions, with files or an RDBMS in the root transaction).


Scalability? Having all "commands queued and routed through a single place" doesn't scale very well. Consider one of these big 64-processor, multi-gigabyte machines using a gigabit card: you wouldn't want all requests serialized through a single bottleneck that involves I/O. With fine-grained distributed transactions you don't need this "single place", or even a single server. I appreciate the "do it in the background" approach, though, as an advance over requiring requests to be queued while the state is saved. It's quite necessary for 24/7 systems.




In my opinion the complexity of 2PC systems comes from shortcomings of the commercial products (BEA WLE, WebSphere, Oracle etc.). They impose big, clumsy, rather old-fashioned development schemes in which the developer is restricted and has to keep track of many conditions. This partly stems from the pain of underspecified and often incorrectly implemented XA interfaces (e.g. writing multithreaded programs with XA adapters from the main RDBMSs is a disaster).

I think that system prevalence would help in implementing web applications which are located on single systems. It is a simple enough paradigm to be used and understood by companies which often fail or are very slow with 2PC transaction systems. Handling of error conditions (pointed out by ncm) might still be a big problem.




just my 2 (soon to be) eurocent and best wishes!

holger
I'll be back..., posted 23 Dec 2001 at 14:08 UTC by KlausWuestefeld »
THANKS A LOT for the FEEDBACK!

This is the first forum outside of my working group to actually get the idea and give me some positive feedback.

I am just leaving on a trip right now (my wife is calling me ;) for Christmas and will be back on Wednesday. Then, I will address all concerns: ACID properties, error-condition recovery, scalability, the works...

Just a note on scalability and concurrency to think about over Christmas: Suppose we have a subscriber management system that receives a file from a bank with 100000 (one-hundred-thousand) payment records. A prevalent server running on a regular desktop machine can handle a command/transaction for this in less than a millisecond and be ready for the next command.

Merry Christmas! See you soon.
testing, debugging, integration, and data migration, posted 23 Dec 2001 at 19:33 UTC by jrobbins »
I used to be a professional Smalltalk programmer, and I was also a professional Lisp programmer. Both of those languages use the concept of a saved memory image as part of their normal development environment.

The simplicity of "just saving the system state" is a double-edged sword. The downside is that it is often hard to specify a particular system state that you might want to use for testing or debugging. If you ever get an object into a "bad state", it can be very hard to find out how it got into that state. In contrast, the impedance mismatch between OO systems and RDBMSs provides a natural boundary and conceptual bottleneck for testing and debugging. It is relatively easy to compare two database dumps to see what is different, or to populate the database with test data, or to see which INSERT statement introduced a particular row into the database. You could have test data consisting of a long set of commands, but that "algebraic" approach to testing does not scale well, and allows defects in mutators to mask defects in accessors.

One thing that I learned while trying to actually sell ST-80 systems to other divisions in a large company is that IS organizations see a standard RDBMS as an integration point. If your system uses an RDBMS, they can plan capacity on a shared database machine: they can generate ad-hoc reports, they can use standard tools for disk backups and such on the database machine only. Also, in the event that your system eventually dies (is no longer maintained, or the license is not extended, or whatever) they will at least have the data in a format that they can get out of your system's tables and into some other system.

Lastly, upgrades were always a pain in image-based tools. Very incremental changes (like adding an instance variable to a class) can be handled by the serialization system. Any reorganization beyond that would require custom coding. In contrast, you can do small and mid-sized reorganizations a lot more easily in SQL.
Why bother with disk at all?, posted 23 Dec 2001 at 20:23 UTC by egnor »
I'll take the opposite tack for variety:

If you're going this far, why bother with a disk at all? Just attach a battery to your RAM. If you want reliability, keep replicas. If a replica is lost, "clone" another one by freezing its message queue and copying the frozen image; the two clones can then "catch up" with the queued messages in parallel.

Copy-on-write VM tricks may soften the need to entirely freeze a replica during checkpointing.

I suspect the points raised in most of the comments can be fixed. (After all, suppose we were looking the other way. Compared to modern programming languages, databases and middleware systems have lots of horrible misfeatures, starting with bad syntax and ending with fundamentally broken models of (non-)encapsulation and (non-)reuse and (non-)genericity; the complaints in the other direction seem relatively trivial by comparison. How can any self-respecting software engineer stand to use today's RDBMS systems without feeling dirty all over?)

jrobbins's notes are the most interesting. It's worth noting that these are basically software engineering problems having to do with how to maintain long-running systems, not issues with the physical architecture proposed here. Is an RDBMS the best way to solve those software engineering problems? It's hard to believe. Are these problems worth solving for other domains? You betcha. I'd love to be able to upgrade my applications without restarting them. (Thanks to Debian, I can mostly upgrade my operating system without restarting it -- something users of e.g. Windows may have difficulty imagining.)
Relational algebra and persistence are both supposed to be simple, posted 24 Dec 2001 at 03:36 UTC by tk »
Data persistence is definitely not a new idea. In fact, if I remember correctly, persistent storage (ferrite cores) actually predates volatile storage. I guess it somehow faded away, only to emerge recently under the guise of persistent OSs such as EROS, persistent architectures such as Prevayler which we now discuss, and so on.

It's hard to see how relational algebra and persistence compare with each other. After all, relational algebra was supposed to be simple anyway -- data are nothing more than just lots of mathematical relations, right? We now know, however, that this 'simple' idea is fraught with practical problems.

Will the same happen for persistence? Maybe, or maybe not. As jrobbins mentioned, changing the 'shape' of objects is a problem, and there are probably many other problems.
XML Serialization, posted 24 Dec 2001 at 11:54 UTC by CryoBob »
I might be taking a bit of a simplistic view on the subject, but couldn't a lot of the issues raised by jrobbins (relating to testing, and to having data in a useful format if a system is retired) be addressed by XML serialization? If we are going to serialize all the commands and business objects anyway, why not have an option or feature to dump this information to an XML file? Then, when tracking states, you could do a dump at each command and compare the XML output to see where things are going wrong.

XML serialization also has the advantage of being self-describing, rather than sitting in a group of tables in binary format on a database server. I mean, what happens if your RDBMS company goes bust and you can't get at the data because of a licence timeout, for example...

Obviously XML serialization adds another overhead to the system, but if implemented correctly you could serialize in binary format to boost performance and then, should you need to restore the state for investigative/testing/export purposes, load the objects through an object-to-XML parsing engine and look at the output.
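
As a sketch of that idea: on later JDKs (1.4 and up), java.beans.XMLEncoder can dump bean-style objects as self-describing XML. This is only an illustration of the approach, not something Prevayler does:

import java.beans.XMLEncoder;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

class XmlDump {
    // Writes a bean-style object graph as self-describing XML for inspection.
    static void dump(Object state, String fileName) throws IOException {
        XMLEncoder encoder = new XMLEncoder(
                new BufferedOutputStream(new FileOutputStream(fileName)));
        try {
            encoder.writeObject(state);
        } finally {
            encoder.close();
        }
    }
}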
Processing speed doesn't increase consistency, posted 25 Dec 2001 at 15:01 UTC by baueran »
Yes, you are right: in RAM a desktop machine may be able to process your 100,000 records in less than a second or something (I don't think that's representative of anything, though), but I do not think that makes the system necessarily more consistent or bullet-proof. What happens if you (or any of your client applications) run into a deadlock within a millisecond? How consistent will the rest of the system and data be without an ACID paradigm to rely on? Correct me if I'm just not getting the point, but I believe this issue is not addressed in this approach.
trademarks?, posted 25 Dec 2001 at 18:45 UTC by dalke »
Minor point, but '"PREVAYLER" and "OPEN-SOURCE PREVALENCE LAYER" are trademarks of Klaus Wuestefeld'? I'm curious about trademarking a few things of my own, so I checked the USPTO. Neither mark is listed. Given the email address ending in ".br", are they only trademarked in Brazil?
I'm Back, posted 26 Dec 2001 at 18:59 UTC by KlausWuestefeld »
I agree that the Prevayler implementation, as it is today, is robust, fast and scalable enough for most applications.

In the company where I work there are 7 people working on two projects using Prevayler to be released in January. I am also glad to help any other early Prevayler adopters.

I would like to share some thoughts, though, on the use of prevalence "in the large" to make sure that we are not missing out on some very interesting possibilities.

First, I will give a few very quick, specific and UNJUSTIFIED answers, and then, in a separate comment, I will give a more complete explanation in an attempt to clarify all concerns so far...
Re: Interesting but..., posted 26 Dec 2001 at 19:10 UTC by KlausWuestefeld »
There are still quite a few things we really need transactions for: -- ncm

I apologize. Prevayler does have transactions.

Although a prevalent system can define transactions (commands) and provide them for a client to use, there is NO TRANSACTION SCHEME the client can use to arbitrarily define new TYPES of transactions (new atomic sets of business operations) whenever it fancies. The last thing we need is another transaction scheme allowing clients to bring business logic into their own hands.

I realize the article is confusing in this respect. I have corrected the "official" version of the article to make this clear.



When you make the first of a series of changes to objects in the database, you typically break one or more database invariants until you get the last change entered. Other processes looking at the database had better either wait, or had better see the state it had before you started.

Yes. In the prevalence scheme, the other processes shall wait.




To get much concurrency, you need to snapshot the state before the first change.

Hmmm. What if the waiting time for each transaction is only a few microseconds? (I shall explain...)




If you get halfway through a series of changes and crash, the system had better come back up without the changes you made, because you're not going to be equipped to continue where you left off.

Yes. The article already covers this well, though. Are there any doubts?




If you get halfway through a series of changes and discover some condition that keeps you from finishing, you had better be able to just drop the changes and pick up with the original snapshot.

"You" (the system server, I presume) will never be halfway through a series of changes and discover some condition that keeps "you" from finishing. (I shall explain...)




If N processes make a series of conflicting changes concurrently, (N-1) of them had better be told that their changes have failed, and that they must try again.

There are no concurrent changes in a prevalent scheme. All changes are sequenced.



There's a reason that databases are written by career professionals.

Yes. Databases are way too complex. ;)



A simple object database can be really useful, but that doesn't make it a substitute for the real thing. That's part of why so many "object database" companies failed some ten years back.

Prevalence is a persistence scheme, and, like OODBMSs, Prevayler will guarantee a logically crash-free object space for your business objects. Prevayler is not an object database manager, as I see it, though. It does not provide any sort of language for data storage or retrieval (ODBMSs normally provide some OQLish thing). Database managers are also worried, among other things, about how they will store chunks of data from RAM to disk and how they will retrieve those chunks later. When you have enough RAM for all your system data, you need no longer worry about that.

When you have enough RAM (the prevalence hypothesis) and a crash-free object space, many database career professionals' assumptions no longer hold.

Interesting but ... one has to free one's mind. New possibilities are waiting.
Re: MySQL Comparison, posted 26 Dec 2001 at 19:14 UTC by KlausWuestefeld »
(e.g. if you're happy with the robustness and performance of MySQL, you'll probably be happy with this, too)

Of course you will be happy! Prevayler is much more robust* and much faster** than MySQL. ;)

* Robustness, as I understand it, is related to failure. The fewer failures something presents, the more robust it is - as simple as that. Prevayler's robustness is bounded by the robustness of the VM and its serialization algorithm. Prevayler is so simple (564 lines including comments, javadoc and blank lines) you could probably write a formal proof for it.
** I have tried both but please don't take my word for it. Try them out too.

"Since Prevayler is also simpler to use, what is the advantage of MySQL?" Some people like SQL and the relational model. MySQL is a relational database manager with an SQL interface. Prevayler is not.
Re: Java Flushing to Disk, posted 26 Dec 2001 at 19:17 UTC by KlausWuestefeld »
Klaus, as a matter of interest, how did you manage to get Java to force flushing to disk?

FileOutputStream.getFD().sync()
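
In context, a command log writer might look like the following sketch (the CommandLog class is illustrative, not Prevayler's actual code):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

class CommandLog {
    private final FileOutputStream fos;
    private final ObjectOutputStream oos;

    CommandLog(String fileName) throws IOException {
        fos = new FileOutputStream(fileName);
        oos = new ObjectOutputStream(fos);
    }

    // A command counts as logged only once it is physically on disk.
    void append(Object command) throws IOException {
        oos.writeObject(command);
        oos.reset();        // don't let back-references pin old commands in memory
        oos.flush();        // push Java's buffers to the OS
        fos.getFD().sync(); // force the OS to flush to the physical disk
    }
}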
Re: Interesting but problematic, posted 26 Dec 2001 at 19:24 UTC by KlausWuestefeld »
Thank you (Klaus Wuestefeld) for your nice write-up.

You are welcome.



Let me discuss/ask some further points: Distributed systems? If an application deployed with system prevalence contacts other services or other servers, you have a synchronization problem. How do you handle that? I guess you end up doing a 2PC-like synchronization between your prevalence servers.

I didn't understand the question very well.



A fine-grained transaction model versus all or nothing? In big business-object systems there are actually lots of small transactions. The system prevalence paradigm doesn't give you fine-grained application-side control, or does it?

No, it doesn't. I believe that to be inefficient and unnecessary. Maybe we could discuss an example where you think it might be necessary.



Note, though, that you can adapt (extended) 2PC transactions to work efficiently RAM-based while retaining persistent storage properties (by using RAM-based subtransactions, with files or an RDBMS in the root transaction).

Yes. I know. Three years ago, I wrote an object-relational persistence layer for Java that had nested transactions in RAM and an optional* RDBMS in the root transaction.
* You could run everything in RAM if you wanted. That was good for presentations, developing without database configuration hassle and running test scripts very fast.



Scalability? Having all "commands queued and routed through a single place" doesn't scale very well. Consider one of these big 64-processor, multi-gigabyte machines using a gigabit card: you wouldn't want all requests serialized through a single bottleneck that involves I/O.

Make sure you let the people using ORACLE (and its redo log files) know about that. ;)



With fine grained distributed transactions you don't need this "single place" or even a single server.

Sounds interesting. Could you elaborate and give an example?



I appreciate the "do it in the background" approach, though, as an advance over requiring requests to be queued while the state is saved. It's quite necessary for 24/7 systems.

Was it clear to you, from the article, that your prevalent system DOES NOT have to stop in order to save its state?
Re: testing, debugging, integration, and data migration, posted 26 Dec 2001 at 19:31 UTC by KlausWuestefeld »
I used to be a professional Smalltalk programmer, ...

Me too, for 5 years. :)

The simplicity of "just saving the system state" is a double-edged sword. The downside is that it is often hard to specify a particular system state that you might want to use for testing or debugging. If you ever get an object into a "bad state", it can be very hard to find out how it got into that state.

In the prevalent scheme, with some daily system snapshots, you can retrieve the system's state before it "got bad"; and with the command logs you can actually replay your commands one-by-one until you get to the rotten one. Of course, I am supposing you have a decent "object encapsulation breaker" FOR DEBUGGING PURPOSES ONLY.
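
A sketch of such a replay, reusing the hypothetical classes from the earlier sketches (this is not Prevayler's actual recovery code):

import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

class DebugReplay {
    static SubscriberSystem replay(String snapshotFile, String logFile)
            throws IOException, ClassNotFoundException {
        // 1) Restore the last snapshot taken before the state "went bad".
        ObjectInputStream snapshot =
                new ObjectInputStream(new FileInputStream(snapshotFile));
        SubscriberSystem system = (SubscriberSystem) snapshot.readObject();
        snapshot.close();

        // 2) Re-execute the logged commands one by one; an invariant check
        //    or breakpoint here is what finds the rotten command.
        ObjectInputStream log = new ObjectInputStream(new FileInputStream(logFile));
        try {
            while (true) {
                ChangeSubscriberName command = (ChangeSubscriberName) log.readObject();
                command.executeOn(system);
            }
        } catch (EOFException endOfLog) {
            // The whole log has been replayed.
        } finally {
            log.close();
        }
        return system;
    }
}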

I know there aren't many of those around (compared to SQL-based tools) but that is more of a cultural problem, I believe. As you say, people are used to rows and columns. They like to break their systems' encapsulation with SQL tools and, at the same time, they like to complain: "Where are all the benefits object orientation has promised us?". ;)

What can you do? I expect things like Prevayler to gradually break this vicious circle.

Lastly, upgrades were always a pain in image-based tools. Very incremental changes (like adding an instance variable to a class) can be handled by the serialization system. Any reorganization beyond that would require custom coding. In contrast, you can do small and mid-sized reorganizations a lot more easily in SQL.

My team and I would always do our migrations in Smalltalk (I wrote an object-relational persistence layer for Smalltalk 6 years ago). We would only use SQL or PL as a last resort and for performance reasons. With all your objects in RAM, that is a different story... ;)
Re: Why bother with disk at all?, posted 26 Dec 2001 at 19:33 UTC by KlausWuestefeld »
You can certainly go for RAM all the way and have several replicas, if you can afford it.

I could not agree more with egnor.

Just a comment on the "Copy-on-write VM tricks" to "soften the need to entirely freeze a replica during checkpointing": it is a bit complicated dealing with executing threads because your memory might never be in a consistent state at any given moment in time. The orthogonal persistence guys (like the guys mentioned in the article) have not figured out how to solve this problem.

With prevalence, the problem simply doesn't exist.
Re: XML Serialization, posted 26 Dec 2001 at 19:34 UTC by KlausWuestefeld »
There is a colleague of mine fiddling with several XML-serialization libraries because he wants to include that in Prevayler.
Re: Processing speed doesn't increase consistency, posted 26 Dec 2001 at 19:37 UTC by KlausWuestefeld »
The point about speed is that, if every transaction is extremely fast, you do not have to handle concurrent transactions. That makes life MUCH easier. I am not only talking about sheer RAM processing speed increase, mind you. I am talking about a design change. I shall explain it in one of the following comments.

The ACID properties do remain.
Re: Trademarks, posted 26 Dec 2001 at 19:38 UTC by KlausWuestefeld »
"PREVAYLER" and "OPEN-SOURCE PREVALENCE LAYER" are trademarks of Klaus Wuestefeld in the same way that "Linux" is a trademark of Linus Torvalds.

They are not REGISTERED trademarks though. Much like a copyright, you do not have to register it to be entitled to a trademark.

Of course, the suits will always tell you that it is better to register.
Serialization Throughput Test, posted 26 Dec 2001 at 19:47 UTC by KlausWuestefeld »
How fast does serialization run on your machine?

import java.io.*;

public class SerializationThroughput {

    public static void main(String[] args) {
        try {
            FileOutputStream fos = new FileOutputStream(new File("tmp.tmp"));
            ObjectOutputStream oos = new ObjectOutputStream(fos);

            Thread.sleep(5000); // Wait for any disk activity to stop.
            long t0 = System.currentTimeMillis();

            int max = 10000;
            int i = 0;
            while (i++ < max) {
                oos.writeObject(new Integer(i));
                oos.reset();
                oos.flush();
                fos.getFD().sync(); // Forces flushing to disk. :)
            }
            System.out.println("This machine can serialize "
                    + max * 1000 / (System.currentTimeMillis() - t0)
                    + " Integers per second.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

My 450 MHz K6-II running Windows 98 with a 3-year-old IDE hard drive gives me the following result: "This machine can serialize 576 Integers per second."

Can anyone give me more? :)
PREVALENCE IN THE LARGE, posted 26 Dec 2001 at 20:11 UTC by KlausWuestefeld »
OK, here we go:

I shall leave automatic load-balancing aside for now and concentrate on the concerns we already have.

Atomicity and Crash-Recovery
This is already covered in the article.

Consistency and Error-Conditions
Every command is executed on its own. The business system must either check for inconsistencies before it starts executing any command or be able to undo whatever changes were done if it runs into an inconsistency. In my designs I prefer the first approach. The demo application included with Prevayler has good examples.

Isolation
While a client is preparing a command to be executed, no other client can see what that command is all about.

Durability
The snapshots and command logs guarantee your persistence. If you use replicas, as described in the article, your system shall not only persist, it shall prevail.

Scalability and Performance
Suppose we have a multi-threaded system in which all threads do all of the three following things:

1) Client stuff - Waiting for an HTTP request; Waiting for an RMI request; Reading a file; Preparing a command to be executed; Writing a file; Generating HTML; Painting a GUI screen; etc...

2) Prevayler stuff - Logging a command to a file. (This is the only thing Prevayler does on the hot system during execution. The snapshot is taken by the replica and has no impact here.)

3) Business stuff - Processing a command; Evaluating a query.

For simplicity, Prevayler's implementation, today, will synchronize "Logging a command" and "Processing a command" in a single go. That is not necessary though. The only conditions we have to meet are:
- All commands are logged.
- All commands are executed after they are logged.
- All commands are executed in the same order as they are logged.

Using two producer-consumer queues would already alleviate that a little (a sketch of this appears after the list below). The main problems, though, are still:
- It might take a long time to serialize certain large commands and Prevayler doesn't serialize and log more than one command at a time.
- The business system cannot process more than one command at a time.
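
Here is the two-queue sketch promised above. It uses java.util.concurrent, which postdates this discussion; writeToLogAndSync and executeOnSystem are placeholders for the logging and business code:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class CommandPipeline {
    final BlockingQueue<Object> toLog = new ArrayBlockingQueue<Object>(1024);
    final BlockingQueue<Object> toExecute = new ArrayBlockingQueue<Object>(1024);

    void start() {
        // One logger thread: guarantees every command is logged before it is
        // executed, and fixes the execution order.
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Object command = toLog.take();
                        writeToLogAndSync(command);
                        toExecute.put(command);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        // One executor thread: applies commands sequentially to the system.
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        executeOnSystem(toExecute.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
    }

    void writeToLogAndSync(Object command) { /* serialize, then getFD().sync() */ }
    void executeOnSystem(Object command)   { /* apply to the prevalent system */ }
}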

The first problem is easy to solve. 4096 (or more) "slave" log files could be used to serialize and log up to 4096 (or more) SIMULTANEOUS COMMANDS. There need only be a "master" log file indicating in which "slave" log file the last command was serialized (it is not even necessary that the first command that started being logged be the first one to finish). In terms of scalability and throughput, this is as much as you can get even in an RDBMS like ORACLE, because of its redo log files.

Take a look at the "Serialization Throughput Test" above, to see how well your machine would do as a "master logger". :)

All these performance enhancements are already scheduled for future Prevayler releases. If anyone is considering using Prevayler on a project for a system that actually needs them already, I will be glad to implement them sooner (or integrate someone else's implementation) and help out on the project design.

All other thread activities, including query evaluation, mind you, can already be processed in parallel. So, you can have as many processors as your VM, OS and hardware will support.

On to the second problem: "The business system cannot process more than one command at a time.".
To overcome that, then, we will establish a simple rule: "The business system cannot take more than a few MICROSECONDS to run any single command."

"Oh no! I knew it! This guy is crazy!", some might think, "How can I possibly process 100000 payment records in only a few microseconds?".

For 99% of your commands, like changing a person's name, you check for inconsistencies (invalid name, duplicate name, etc), and then you just execute it normally. With your objects in RAM, that will only take a few microseconds anyway.

For 1% of your commands (the hairy ones), like processing a batch payment with 100000 payments, lazy evaluation is the key: your system simply doesn't process the command. Instead, it just keeps the command in the "batch payments" list for future evaluation.

The command will be processed bit-by-bit whenever a query is evaluated regarding that command. It is important to note that, while the client is building the command, the command is internally preparing its structure to be kept in the system without further processing. Remember: a prevalent command is much more than an atomic set of operations. It is a full-fledged object and can be responsible for much of the system's business intelligence! The batch payment command, for example, would keep all payment records internally in a HashMap with the contract id as the key.

Suppose you then query the payment status of any given contract. The contract will ask itself: "When was the last time I updated my payment status?". It will then look at the "batch payments" list (there are two or three batch payments a month): "Were there any batch payments since my last update?". If there were, the contract updates itself accordingly (one HashMap lookup per batch). Then, the contract simply returns its payment status. This all takes only a few microseconds too.
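
A sketch of this lazy scheme with hypothetical classes (amounts in cents, to keep the arithmetic exact):

import java.io.Serializable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The batch command keeps its records ready for lookup; nothing is
// processed when the command arrives.
class BatchPayment implements Serializable {
    final Map<Long, Long> paidCentsByContract = new HashMap<Long, Long>();
}

class Contract implements Serializable {
    final long id;
    long balanceCents;
    int batchesSeen; // how many batches this contract has already applied

    Contract(long id) { this.id = id; }

    // Called on every payment-status query: catch up lazily, then answer.
    long balance(List<BatchPayment> batches) {
        while (batchesSeen < batches.size()) {
            Long paid = batches.get(batchesSeen++).paidCentsByContract.get(id);
            if (paid != null) balanceCents -= paid.longValue(); // one lookup per batch
        }
        return balanceCents;
    }
}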

You could have a query, though, that actually depends on the processing of ALL the payments (e.g. "Total Monthly Revenue"). In this case, the query AND ONLY THIS QUERY will take about 2 seconds* to execute. All the rest of the system continues working at full speed and with full availability.

*Today, my company has an ORACLE-based billing system running on big Solaris boxes that takes 62.5 machine-hours to process 100000 payment records. We estimate that doing it all in RAM would take no more than 2 seconds (on my desktop machine, mind you).

Are there any more doubts or are all your systems already prevalent? ;)
Re: Trademarks, posted 26 Dec 2001 at 20:14 UTC by dalke »
They are not REGISTERED trademarks though. Much like a copyright, you do not have to register it to be entitled to a trademark.

Ahh, thank you. The USPTO link for that is: http://www.uspto.gov/web/offices/tac/tmfaq.htm#Basic001.

Do I need to register my trademark? No.

Also, what are the benefits of federal trademark registration?
- Constructive notice nationwide of the trademark owner's claim.
- Evidence of ownership of the trademark.
- Jurisdiction of federal courts may be invoked.
- Registration can be used as a basis for obtaining registration in foreign countries.
- Registration may be filed with U.S. Customs Service to prevent importation of infringing foreign goods.


"PREVAYLER" and "OPEN-SOURCE PREVALENCE LAYER" are trademarks of Klaus Wuestefeld in the same way that "Linux" is a trademark of Linus Torvalds.


Umm, except that Linus owns the registered trademark on Linux, serial number 74560867 at uspto.gov. There was a big hoorah about this some five years ago when someone other than Linus registered the term for himself. Some of the links about the topic are mentioned at http://www.linux10.org/history/ .


Of course, the suits will always tell you that it is better to register.

Most "suits" would say that if you have the $325/10 years and don't want to go through the hassle of defending your mark if your work becomes popular, then it's worth it.
Thoughts about Prevayler, posted 27 Dec 2001 at 02:39 UTC by Gandhi »
I'm using Prevayler in a beta system I'm developing, and I think the main problem when you expose this kind of system is that there are no studies saying whether it's right or not.
Of course a lot of people thought about this before Klaus, but has anyone really made a serious study of the most common actions (procedures) performed for each category of application?
What is the best application category for Prevayler?
Does anybody know the REAL consistency of the systems on the market?
Don't you think inconsistency in 99% of cases is just the result of bad code at the top layer? Can't we just make a fault-tolerant system and keep the system working, no matter how bad a coder the guy is?
New Java implementations (1.3 and 1.4) have new classes that allow high-speed messaging pipes between applications. Can you imagine a better use for these pipes?
I agree that XML serialization is a good thing, mainly for debugging purposes and its atomicity, but how can you compress it? And if you compress it, why keep it as XML?
I think that just a better serialization scheme should do the trick, with compression, cryptography, and a hierarchical system that could easily allow XML translation. Externalizable methods would do the job. Any volunteers?

One easy question: is it a framework? Is there a planned plugin structure? Will everything be done through interfaces? No register classes or similar approaches?

[]s, gandhi.
Prevayler Plug-ins, posted 29 Dec 2001 at 04:06 UTC by KlausWuestefeld »
One easy question. Is it a framework?
Not at present.

Is there a planned plugin structure?
No. Can there be a plugin structure in the future? Yes.

There is no design trait in Prevayler based on predictions for the future. Prevayler's design, at any point in time, will be the simplest design that we can achieve and that satisfies all CURRENT requirements. The goal is anticlimactic simplicity.

Don't worry. Thanks to simplicity, the day you write the first plug-in for Prevayler, we will easily find a way to "plug it in". The day you write your third Prevayler plug-in, there will certainly be a "plug-in structure" in place.

That is the beauty of open-source and that is the beauty of simple design.
To Be Continued..., posted 29 Dec 2001 at 04:16 UTC by KlausWuestefeld »
Anyone interested in knowing more about prevalence or in further discussing the subject (without necessarily having Advogato certification) should take a look at the Prevayler Forum.

See you there, Klaus.
orthogonal persistence, posted 29 Dec 2001 at 15:44 UTC by jerry »
Askemos has a similar take on persistence. Just not "all in memory" but "always saved to file" - after each transaction on any of your objects.
Serialization Throughput for Larger Objects, posted 30 Dec 2001 at 18:08 UTC by Ward »
I generalized the throughput test to write records of various size. For small records the time is dominated by the flush; for large ones, transfer time. I found the knee of this classic curve to be at about 300 Integers (3k bytes) on a Windows platform and 100 Integers on a Linux. All but one machine I tested showed other behaviour that I cannot explain. I've written a short note with graphs and the revised test source code.
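
Ward's note has the actual source; a rough reconstruction of such a size-varying test might look like this:

import java.io.File;
import java.io.FileOutputStream;
import java.io.ObjectOutputStream;

public class SizedThroughput {
    public static void main(String[] args) throws Exception {
        for (int size = 1; size <= 10000; size *= 10) {
            Integer[] record = new Integer[size];
            for (int i = 0; i < size; i++) record[i] = new Integer(i);

            FileOutputStream fos = new FileOutputStream(new File("tmp.tmp"));
            ObjectOutputStream oos = new ObjectOutputStream(fos);
            int writes = 100;
            long t0 = System.currentTimeMillis();
            for (int i = 0; i < writes; i++) {
                oos.writeObject(record);
                oos.reset();
                oos.flush();
                fos.getFD().sync(); // synced write, as in the original test
            }
            long elapsed = System.currentTimeMillis() - t0;
            oos.close();
            System.out.println(size + " Integers per record: "
                    + (elapsed / (double) writes) + " ms per synced write");
        }
    }
}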
Fine, what about Garbage Collection?, posted 3 Jan 2002 at 00:10 UTC by jonabbey »

I designed and implemented a RAM-based, transactional database in Java years ago for Ganymede, and I can attest that keeping everything in memory works splendidly. Add a transaction log for recovery, and you're cooking with gas.


At least, that is, for reasonably small datasets. The big open question for Ganymede, and for any memory-resident Java database system, is how big a cost Garbage Collection becomes when you scale up. Using the operating system's native VM subsystem to handle disk paging works fine, but when the Garbage Collector has to sweep through everything periodically in order to clean up garbage, that sweep presumably has to do a good bit of paging to take care of things.


Do you have any insight into how serious a problem this is? Ganymede works fantastically well for us at the scale we need it to, but I've always imagined (but not tested) that putting a gigabyte of directory data into it would probably not work so terribly well.
Re: Garbage Collection (Raising the Bar), posted 3 Jan 2002 at 01:34 UTC by KlausWuestefeld »
I ran a few tests creating huge arrays of Integers and serializing them to stress the limits of some VMs. Every time we increased the size of the array to a point where the system started paging, we simply had to abort the test after a few hours because we couldn't stand waiting any longer. 55 million was the max we reached without paging, running on an HP-UX machine (thanks to the guys at HP/Porto Alegre/Brazil).

The prevalence hypothesis, though, is that you have enough RAM for all your data so, even when the garbage collector kicks in, your system shouldn't have to page to disk.

Even if you have enough RAM, the garbage collector can be a nuisance in many large systems and a real show-stopper for time-sensitive critical systems. I am not an expert but it seems that most VMs use a mix of generational garbage collection and traditional mark-and-sweep. I really would like to see some three-colouring going on anytime soon (if you know of anything about this please post here).

A very popular VM's heap size won't even reach 1GB. (It will allow you to set the parameter but will shamelessly ignore it if it is above a certain limit). It seems that VMs like that one are targeted only at feeble client code.

I believe that projects using Prevayler will actually raise the bar for VM robustness, heap size and garbage collection performance.