2008-03-14

Toward a more rational public education system

I have been a student far more than most people: it took me 29 years to get through school from first grade to the PhD (with various twists, turns, and interruptions along the way). I also spent ten years teaching music privately and, later, several years homeschooling my two daughters. I have often philosophized randomly on the question of education, and over the years have developed a few ideas and opinions on the subject.

The first main difference between my ideas and the conventional view is over who controls the pace of learning. In the conventional approach, the teacher and school district/department/college control the pace, and the student must keep up, along with all of the other students in his class. Those who work more slowly are lost, and those who work more quickly are bored. I think that a critically important component of a rational educational system is that the student work at his own pace through a structured curriculum.

The second big difference between my approach and the conventional one has to do with what it means to "pass" a course or a grade. The conventional approach is to set a certain average level of performance--generally what is called a "C" grade--and to pass those with C or better averages and fail those with less than C averages. Note that a C average can be attained either with Cs in all subtopics, or with an A in half and an F in the other half of the subtopics. That is, a C average means that there are certain subtopics that have not been mastered; yet, the student must advance to the next level and do work that presupposes this mastery. As one moves through 12 grades, there is an accumulation of nonmastery such that students who graduate high school with an overall C average will have mastered, on average, only half of the subtopics in the courses that they passed. I think that a second critically important component of a rational educational system is that students must master each and every subtopic that they study, in a structured curriculum. By "master", I mean that there should be no misunderstanding, basically performance at the "A" or "A+" level.

Finally, a third significant difference between my ideas and conventional education is that all students must end up at the same point. That is, there is something called a high school diploma, or a bachelor's degree, that all students must achieve within a certain period of time. Those who don't, fail; those who do, pass. My idea is that, given a properly structured curriculum through which students pass at their own pace, but in which each and every subtopic must be mastered before advancing, the proper outcome measure is not a single diploma/no diploma, but rather, an index that represents just how far they have progressed--with complete mastery, remember--through the structured curriculum, at any given point in time.

The structure of the curriculum is extremely important; however, there is already wide general agreement, at least for the core subject areas, as to this structure. There is no reason why extra, less-structured, non-core subjects can't be incorporated into the core curriculum as such, as long as the well-structured core is available. For example, if performance ability on the violin is not part of the core (and why should it be?), there is still no reason why a student should not add a violin performance component to his individual curriculum.

So there should be a national core curriculum broken down into a network of interrelated subtopics such that the dependencies are encoded into the curriculum in the form of prerequisites. When a student has mastered all prerequisites, then he advances to the next set of obligatory and optional subtopics, in an ongoing process.

This means that students will work much more independently than in a conventional classroom. There are two relevant precedents for this. The first is so-called "open education", which was popular in the 1970s (and in which my elder daughter participated for two years). The second is the style of "unschooling" used in many homeschooling families. In both cases, chaos can result in the absence of knowledgeable, well-trained teachers or parents, and the training must include how to let students work as independently as possible, as well as how to convey the information in the curriculum. The approach is also found in Montessori schools, whose emphasis on properly prepared manipulatives and other structured materials is an excellent example of how to carry it out.

It is true that under a fully implemented version of this approach, some students would make their way very quickly through most of the curriculum, "graduating" while still of elementary school age, while others, even after 12 years, would still not be at what is currently known as the "high school level". Is this a bad thing? I would argue that it is much better to master each aspect of basic skills than never to do so but, by occupying a seat, to receive credit for "passing" more advanced ones.

The largest problems in using this approach are (1) to create the curriculum along with all supporting materials, and (2) to train (or de-train) teachers and parents in the method so that they strike the right balance of support for the students.

Greg Shenaut

2008-02-18

Term Limits and Lame Ducks

The specter of the lame duck president, or of a lame duck congress, is a familiar one on the American political landscape. For many years there has been a tension between those who want professional, experienced government leaders and those who would use term limits to allow new blood to enliven government. However, the emphasis by all parties has been on elections and on the beginnings of terms of office; relatively little attention has been paid to the problems of the lame duck.
It seems to me that the problem of the lame duck is at least as serious for the country as the problem of entrenched incumbency. It is well known that the effectiveness of government is greatly diminished during the last year of a president's second term (or the first term if the incumbent does not seek re-election), and also of a congress that has seen a shift in the majority in the elections but must still finish out the current term. Officials either don't do much at all, or they do things that are contrary to the current mood of the nation.
An equally serious end-of-term problem is that of re-election. All too often a great deal of an office-holder's energy during the latter portion of a term is focused on things like fundraising, speechmaking, and pandering, all focused on getting re-elected to office.
I think that both incumbency and end-of-term problems can be addressed by making a few simple changes in the structure of our government, including but not limited to extreme term limits. (This is another in the series of random philosophizations regarding the need to replace our existing constitution through a full-bore constitutional convention.)
Well, for one thing, the lengths of the terms of elective office should be increased somewhat. For example, four years for representatives, six years for presidents, and ten years for senators. Federal elections would be held every two years. Note that during each election, 1/2 of all representatives and 1/5 of all senators would be up for election, and the presidential election would be held every third cycle. To my way of thinking, this scheme would provide much more stability in government, since at least half of each body would remain in office (1/2 for the House, 4/5 for the Senate) each cycle. (Note that the terms are all prime numbers multiplied by two.)
The second change would be to limit each office-holder to a single term: four years maximum for representatives, six for presidents, and ten for senators. The concept of re-election to an office would become obsolete. Every election cycle would bring in new blood: 1/2 of the House, 1/5 of the Senate, and 100% of the presidency. Note that the increases in the lengths of the terms proposed above are a counterbalance for the rather extreme single-term limit. There would never be a complete shake-up in Congress. There could still be a system of seniority, but only to the extent that in the House, the representatives in the second half of their term would be senior, and the ones just coming in would be junior; the same situation would obtain in the Senate, but with five levels of seniority instead of two.
Furthermore, this term limitation would not apply only to re-election to the office currently held, but to any elective office. That is, someone who is currently serving in a federal office would not be eligible for any elective office for the term immediately following the current term. This would reduce the problems we have seen with fundraising and electioneering during the latter portion of most elected officials' terms.
However, there is no reason why someone who has been out of government entirely for at least one election cycle could not run for election to another office. That is, one could see a four-year term in the House, two years out of office, and then a ten-year term in the Senate, or perhaps a six-year term in the White House. However, no matter how long out of office, once individuals have served in the House, they would no longer be eligible to run for a seat in the House. This should even apply to those appointed to fill vacancies: once the term to which they were appointed is up, they would become ineligible in the same way as if they had served a full term. The reason for this is to simplify the seniority system and to prevent end-of-term pandering.
Problems: one problem with this scheme is that the terms of House members no longer divide evenly into the ten-year census cycle. However, there is always a delay in implementing new apportionment after a census; under the proposed system, there would simply be a more gradual application of changes due to each successive census. I have written elsewhere in this blog about my concerns regarding how we have implemented our House of Representatives and Electoral College; for example, a universal at-large election of representatives whose votes in the House are weighted either by the number of constituents they represent or by the actual number of votes they received in the general election would make the census question less problematic. The fact remains that because of the way that representatives' terms overlap one another in this scheme, there would never be a clean break between one system of apportionment and the next. However, given that re-apportionments that change the numbers of representatives would only take effect at the time of an election, there is a fairly simple set of procedures to handle this equitably.
When a re-apportionment occurs, there are three possibilities. First, the number of representatives could remain the same for a given state. In this case, the boundaries could be redrawn and the new districts assigned to continuing representatives as well as to those standing for election. Second, the number of representatives could be reduced. In this case, the reduction would occur only when representatives' terms end; at that point, the number of seats up for election would be reduced. In the interim, any extra continuing representatives would be considered to be "at large" representatives, that is, representing the state as a whole rather than their old (non-existent) districts. Third, the number of representatives could be increased for a given state. In this case, continuing representatives' districts would be redrawn and re-assigned as needed, and for the election, there would be more open seats. Since no representative would be running for re-election, this modified system for implementing reapportionment should cause minimal disruption.
A second class of problems has to do with incumbents who campaign for their "favorite" replacement. This system does nothing to help with that, nor should it. Politicians would still be politicians. However, when we observe campaign activities under the current system, we notice two things: (1) people campaign much harder for themselves than they do for others, and (2) we cut people far more slack in terms of missing votes, being out of Washington, and so on, when they are campaigning for themselves than when they are campaigning for someone else. Therefore, while this activity will still go on, it will be reduced, and it will no longer really be an end-of-term phenomenon (because people will also campaign for members of their party when their term is not ending).
A third class of problems has to do with incentives. Maybe the above changes would simply make all of our elected officials lame ducks. Without any incentive to get re-elected, this line of argumentation goes, what would force our elected officials to do their jobs honestly and sincerely? Well, there are several responses to this. First, I simply happen to believe that the problems surrounding the ends of terms are much greater when the official can be re-elected and is working for that. If all officials were, in effect, lame ducks, the entire dynamic would be changed. People would enter office knowing full well that their time in Washington is limited. Yes, some might treat their elected position as a sinecure: ethics enforcement would be at least as important under this scheme as it is under our current one. However, it would also become much easier for our officials to follow their conscience. Even in the last session of a term, every official would be fully aware that they could not run for elective office for at least two years, which is more than ample time for the fallout from an unpopular vote to dissipate. But this is definitely a balance that deserves full public discussion.
A fourth class of problems is related to the previous class: accountability. Currently, the system is supposed to eliminate an official who doesn't follow the desires of constituents, by electing someone else. As a result, relatively small groups of people in congressional districts often can have a disproportionate effect on national policy and laws, and members of congress abuse such institutions as the legislative earmark. This proposal will, in effect, change the balance, especially in the House, between small groups of constituents and larger national issues. However, it will also make the House somewhat less responsive to the people. Once again, this is a balance that would need to be discussed in detail.
All of the above should be discussed in a nationwide constitutional convention, in my opinion. There is no chance that our current Congress would ever pass such a sweeping change.

2008-02-08

Why Mitt Romney's Defeat is Good for Atheism

• No religious Test shall ever be required as a Qualification to any Office or public Trust under the United States —Article VI, US Constitution

As an atheist, I have long been aware that the American political structure discriminates against atheists. For example, there have been many polls in which a majority of participants say that they would never, or would be unlikely to, vote for an atheist for high office. The way I've always encoded this bigotry is that only monotheists are allowed to pass the constitutionally nonexistent religious test required to qualify for high office. Romney's own election as governor of Massachusetts, along with the service of various senators and representatives who are also Mormons, supported that view, as did the recent election of a Black Muslim to the House of Representatives. I always figured that the divide was between atheists and polytheists on one side, and monotheists on the other.

However, Mitt Romney was defeated in his run for the Republican presidential nomination because he is a Mormon. This really isn't very ambiguous: the Republican base is packed with religious conservatives who are basically on record that they will never vote for a Mormon, and in state after state, it was shown that this was no empty threat, especially since the religious conservatives could support the nonviable Mike Huckabee with their votes instead. The difference between the presidential campaign and other, lower campaigns is simply this: the presidency is the truest test of American prejudices. Various individuals who are not members of mainstream-to-conservative Christian denominations can be elected to lower offices, basically as exceptions, due to the nature of the local constituency, or simply as a fluke, but the likelihood of that diminishes to near zero for the office of President of the United States.

Therefore, it appears that the split is not between monotheists and everyone else after all. So what is the nature of the religious test for office and public trust in the Land of the Free?

I think that the test is actually based on fear of being attacked, as are several other important aspects of the US political landscape (the "War on Terror", the Border Fence, the fear of socialism). In this case, religious individuals view atheists and Mormons as a threat because they understand that those groups' ranks are filled with former mainstream Christians who have become atheists, agnostics, or non-participants in religion, or have become Mormons (and to a much lesser extent, Muslims). That is, the exclusion of certain religious categories is very similar to the kind of discrimination formerly seen among GM workers against Fords and vice versa, or among American autoworkers and foreign cars, or among supporters of various athletic teams. In short, it is a "branding" phenomenon, a defense against competing brands. And why not? At times it appears that our entire culture is based on advertising and marketing. Entire segments of our economy are "ad-based", that is, they make their living by enticing consumers to view or listen to advertising. It should come as no surprise that religions in America have adopted the same kind of advertising/marketing mindset, and that they demand brand loyalty from their adherents. (One might even speculate about the historical connection between religious brand warfare and consumer brand competition: which came first?)

As a practical matter, atheists, Mormons, and Muslims, along with Hindus and most other non-mainstream-to-conservative Christians, still fail and will continue to fail the nonexistent religious test for high office in our land. But it is actually comforting to see that the test is not actually based on religious grounds at all, but on brand loyalty. Who knows, maybe this insight could show a way to move beyond our current religious divisiveness and pettiness. For example, is there a secular brand (American?) that could actually transcend traditional religious and ethnic branding?

2007-12-07

The Senate After 2008

Let's assume that there is a tremendous victory in 2008 for the Democrats. That is, a Democratic president and a Democratic majority in both houses of Congress. This sounds great, but what does it actually mean for the country?

The prognostications that I've seen do predict a Democratic victory, but without much change in the House, which is already in Democratic hands, and with at most four new Democratic seats in the Senate. It is this last prediction that bodes ill for the nation.

In the Senate, any senator of either party can require a 60% vote for clôture before ending debate on a bill. Since (obviously) debate must end before the bill can be voted on, this means that any senator can delay any bill in this way. If it were a matter of a traditional filibuster where a small minority of senators decided to hold up the vote, the 60% clôture procedure would be beneficial. However, what we have seen in recent years is the emergence of a new habit whereby the minority party, as a bloc, uses the filibuster/clôture procedure to prevent action by the Senate. That is, the minority party, even though it would lose in a straight up-and-down vote, can prevent bills from becoming law whenever it wants to.

During the current term, this problem is not as great as it may become after the 2008 elections. In this term, the Republican minority in the Senate can, and does, block most legislation. The only way past them is by pandering to them, compromising strong Democratic programs to the point where (1) they may no longer fulfill the purpose for which they were intended, and (2) they may become so distasteful to House Democrats (who do not have the filibuster/clôture process) that they can no longer pass the House. However, even when a bill does pass both houses of Congress, unless it is truly bipartisan (and therefore usually weak), the President will veto it, so 'twas all for naught.

In the post-2008 world, however (assuming a Democratic sweep), the filibuster/clôture problem will become acute. Even if the Democrats pick up all four seats, there would be 53 Democrats, 2 Democrat-leaning independents, and 45 Republicans. This is nowhere close to the 60 votes required by the filibuster/clôture process, even on bills that Lieberman supports.

Therefore, the public will have spoken overwhelmingly that they want a Democratic government, that they support the Democratic program. Yet, when it comes to passing legislation (and also, certain Executive Branch appointments), the minority party in the Senate will be in position to derail that program.

The problem is exacerbated by the relative unruliness of the Democrats. They are less likely to vote en bloc, and this already weakens the Democratic majority as a cohesive force. However, it is the Senate's filibuster/clôture procedure that will create the largest problem.

This state of affairs has several consequences. For one thing, we should be asking our Democratic presidential candidates how they will deal with this issue. There are basically two ways to do it: (1) compromise with the minority party, or (2) hold the minority party up to public shame for obstructing the will of the people. I think that both methods could work, depending on the goals of the moment, but I confess that the "public shame" approach does appeal to me. However, for it to work, there must be unity among the majorities of both houses and the White House. That is, the onus of explaining why the Senate minority is working to prevent the implementation of the will of the people must be put squarely on that minority. Furthermore, the majority must be damn sure they've dotted all the i's and crossed all the t's. Making a big push like this for poorly written, pork-filled legislation would backfire tremendously.

Another consequence is that the voters will probably not get what they want. There probably will be gridlock once again in Congress, and, if history is any guide, the Democrats will be blamed for it. This is something that must be discussed up front, during the election debate. The voters must understand the dynamics of the situation, and the candidates must address the issue of how their program will fare as a result. This will probably dampen the enthusiasm of voters, but at least they will know what they are voting for.

2007-11-29

Saving space

On my Macintosh, there are two sources of disk bloat: international language support and universal binary support. When an application is distributed, there is a resource folder that contains one or more files for each language supported by the application. For example, iChat supports Dutch, English, French, German, Italian, Japanese, Spanish, Danish, Finnish, Korean, Norwegian, Portuguese, Swedish, and two variants of Chinese. That is, of the 18.5 MB taken up for language support, over 90% is for languages other than English. This phenomenon is true of most Macintosh applications, although not all support as many languages as iChat. The sum total of language files in my /Applications folder alone is currently 2.5 GB, as much as 90% of which is for languages other than English.
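For the curious, here is a rough Python sketch of how numbers like these can be tallied: it walks a folder tree and totals the bytes inside .lproj localization folders, splitting the native language from the rest. (Treating "English.lproj" and "en.lproj" as the native folders is an assumption about the naming convention; adjust to taste.)

```python
import os

def lproj_usage(root, native=("English.lproj", "en.lproj")):
    """Total bytes in .lproj localization folders under root,
    split into (native_bytes, other_bytes)."""
    native_bytes = other_bytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirpath.endswith(".lproj"):
            continue
        # Sum the sizes of the files directly inside this .lproj folder.
        size = sum(
            os.path.getsize(os.path.join(dirpath, f)) for f in filenames
        )
        if os.path.basename(dirpath) in native:
            native_bytes += size
        else:
            other_bytes += size
    return native_bytes, other_bytes
```

Running something like lproj_usage("/Applications") on your own machine will show how lopsided the native/non-native split is.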

Another source of bloat is the universal binary. This is a method, used for executables and object libraries, of packing several different versions of a compiled program into a single file. For example, in the Macintosh world, there can be Power PC, 32-bit i386, and 64-bit i386 versions of all compiled programs and libraries. In my /Applications folder, 452 MB are used for executables, with another 82 MB in /usr/lib for dynamic libraries, so just over 0.5 GB for this purpose, with around half or more used for non-native CPU architectures.

Personally, this doesn't really bother me that much. I have 100 GB on my hard drive, and newer systems generally have much more than that out of the box. Expending 2.5 GB or so on international and universal CPU support isn't a problem. However, many users are annoyed by this state of affairs, and there have been many hacks proposed to delete foreign language support and to remove non-native CPU support. However, these hacks can mess up the software maintenance process in various ways, and a software update can undo the effects of the hack.

In any case, I think that this issue deserves to be taken seriously at the system design level. In my view, a decent compromise would be to allow users to enable auto-compression of the less-frequently used components of their system. What follows is a proposal for a system-level change that could accomplish this fairly easily.

The first and most important thing would be to build in expansion of individual compressed files and folders to the software libraries or frameworks at a low enough level so that the process would be transparent to most programs. In effect, there would be a bit right in the inode of a file or directory indicating that it is compressed. (There could optionally be some other bits indicating the compression mode.) That is, the user would see no difference between a compressed or uncompressed file system resource; the standard frameworks would invisibly expand compressed files. In addition, auto-compressing or uncompressing a file system object should not cause any of the time stamping on the object to change.

For reasons of efficiency, a few programs would work with the compressed objects directly; for example, utilities such as find(1), and GUI file system browsers such as the Macintosh Finder, shouldn't require that files or folders be expanded. That is, there should be support built in for shallow access of auto-compressed file system objects in their compressed state.

However, most programs would trigger auto-expansion of the file system objects they touch. If you open a file in a text editor or word processor, it would auto-expand. If you compile a source file, it would expand. If you execute an executable, it would expand.
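To make the expand-on-open behavior concrete, here is a minimal Python sketch. It stands in for the real design's inode bit with a crude convention: a compressed copy lives at path + ".gz", and opening the path expands it in place, preserving the original timestamps, before handing back an ordinary file object. This is an illustration of the idea, not the low-level framework implementation the proposal calls for.

```python
import gzip
import os
import shutil

def transparent_open(path, mode="rb"):
    """Open path as if it were always expanded: if only a compressed
    copy exists (path + '.gz' in this sketch; an inode flag in the
    real design), expand it in place first, carrying over the stored
    timestamps, then open the plain file normally."""
    gz = path + ".gz"
    if not os.path.exists(path) and os.path.exists(gz):
        st = os.stat(gz)
        with gzip.open(gz, "rb") as src, open(path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        # Preserve the timestamps so expansion is invisible to callers.
        os.utime(path, (st.st_atime, st.st_mtime))
        os.remove(gz)
    return open(path, mode)
```

A program using transparent_open never knows whether the file had been auto-compressed; that invisibility is the whole point of pushing the mechanism down into the standard frameworks.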

Each file system object keeps track of how long it has been since it was last read or written. A system daemon needs to scan the file system at low priority in the background, and compress file system objects that have not been read or written recently, where "recently" can be defined programmatically. For example, the idle time required for an object could be a function of individual files or folders, of file types, of file ownership, the amount of free space on the disk, and so on.
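A single pass of such a daemon might look like the following Python sketch, using a path + ".gz" naming convention in place of the proposed inode flag, with the idle threshold as a tunable constant. A real implementation would run continuously at low priority and vary the threshold by file type, ownership, free space, and so on.

```python
import gzip
import os
import shutil
import time

IDLE_SECONDS = 30 * 24 * 3600  # compress after ~30 days untouched (tunable)

def compress_idle(root, now=None):
    """One background pass: gzip-compress regular files under root
    that have not been read or written recently, preserving their
    timestamps on the compressed copy.  Returns the paths compressed."""
    now = time.time() if now is None else now
    compressed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".gz"):
                continue  # already compressed
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if now - max(st.st_atime, st.st_mtime) < IDLE_SECONDS:
                continue  # recently used; leave it expanded
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            # Carry the timestamps over so compression is invisible.
            os.utime(path + ".gz", (st.st_atime, st.st_mtime))
            os.remove(path)
            compressed.append(path)
    return compressed
```

Note that the stat is taken before the file is read for compression, so the daemon's own activity doesn't reset the idle clock it is measuring.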

Under this scheme, all applications and files would be installed in their compressed state, and even the system would have everything compressed initially. As the user began to use the system, things would expand. If a resource wasn't used for a while, then (if this functionality is enabled), it would be autocompressed by the daemon.

A related functionality would be targeted toward the elements of universal binaries. This system would compress those elements that have not been accessed recently, and uncompress them as needed. 

In effect, there would be a trade-off between disk space and execution time. If the compression is set too aggressively, you'll save lots of disk space, but your system will be spending a lot of time compressing and expanding files, and will be slow. However, a correct balance will buy you disk space but cost very little in time. For example, I do not read any of the non-Latin character set languages, so I would rarely access the .lproj folders associated with them; they would probably all stay compressed at all times. On the other hand, the English files would be accessed frequently and would rarely qualify for auto-compression.

The situation would be even simpler for universal binaries, since in almost all cases, only the native architecture would be used on a given machine. The exception would be a file server with clients of different architectures; in that case, several architectures would be expanded.

This change could be done in a fairly straightforward manner, I think. However, I must admit that I would probably not enable it on any of the Macintoshes that I own or administer. As I stated in the introduction, the amount of space used to support internationalization and universal CPU architectures is small as a percentage of modern disk space. If a given system was actually running out of disk space such that this overhead became critical, the correct solution, in my opinion, would simply be to upgrade the hard drive.