A different wavelength

Working at Diamond I know quite a few science success stories. New protein structures are solved, telling us how viruses spread through the body, how certain cancer drugs might (and might not) work, what makes a virus jump species. I read about new materials being developed for the engineering industry, new types of solar cell, understanding how explosive materials work. Scientists here have studied dehydrated dolphins to understand how fossils form, how bacteria could clean up the environment and why some artificial hips need to be replaced.

All in all, science seems to be in pretty good nick.

Then my mum got cancer.

I'm good at finding information. It's what I do for a living. So I went and looked at statistics, to understand what the prognosis was. I found out as much as I could about the current knowledge of that type of cancer - I read PR material, charity websites, scientific papers. I couldn't help her get better, but I could understand what was going on.

In the meantime, mum was going to hospital, where she underwent a series of tests, a minor operation, and was put on a course of drugs, specific to her type of cancer. The tests gave a scientific diagnosis of what the cancer was. The drugs were scientifically tested to ensure that they worked, that the side effects were minimal, and that they gave my mum the best chance of recovery. Ten years ago, those drugs were not available - even if they had been discovered they hadn't gone through the battery of tests required to allow their use on humans. Ten years ago, my mum may well have died.

Interestingly, long term statistics on cancer survival rates are not precisely defined, for exactly this reason. The X% of those who have already survived ten years since they were treated didn't have access to the treatment available to people today. What percentage of those who died then would live today? There are trends that show people are living for longer after cancer. But there are still many questions left.

Why did my mum have such an aggressive form of cancer, usually only seen in much younger women? Why did she recover when others haven't? How can we know that she will not contract the disease again? Is there a way the treatment that led to her recovery could be made less traumatic? Are my sister and I genetically susceptible? What can we do to reduce our chances of contracting the disease? How can we answer these questions without science?

For me, science is fascinating. It tells us about the world around us, where we came from, why things work, and how we can improve our world. Science is fun - I read about it because I want to, because I'm curious. Science (or at least the communication of it!) pays my bills. All these things are good, but I could do without them, although I'd be bored more often.

And then, science saved my mum.

Yes, there were doctors whose expertise allowed a quick diagnosis, yes the NHS paid for her treatment, yes, the nurses cared for her, but her survival depended on scientific research.
It won't save everyone, but the more we know, the better survival rates will be, the less unpleasant the treatment, the more options there will be for those who are diagnosed.

And that is why I think science is vital.


Ramesh Jain starts by defining life as a set of events from which we gain experiences. Human history is the history of communicating experiences, using a developing set of tools: from language and written language, through the printing press and telephone, to the internet.

At the beginning of computing users were specialised and had to be trained; now, with the internet, we are all users. With the advent of mobile devices it has become experiential - helped by the fact that mobile devices have more "senses" - they can see, hear and detect movement, more like our own senses - and this has led them to become a way of sharing events and experiences. His view is that the evolution of the web is all about events, not content (data).

There is a lot of information out there about events, but the mainstream news only covers big events - plane crashes, international sports - not news from our families, the events that are important to us. With social networking all this information is there, but we have to go to different sources - the BBC, Facebook etc - rather than through one single portal. Different mechanisms capture different events.

Events are connectors - they create context, for people, things, place, time, experiences, photos and other events. Facebook and Twitter report on events at a rate of billions a month. But these are just statements - how can these "micro events" create situations? Cameras contain a lot of information - time, location as well as the image - they have become event recording devices. We are now seeing a photo explosion, 3 billion photos are uploaded to Facebook every month, yet a lot of people are taking photos who are not uploading them - what is happening to them?

How do we go from micro-events to situations? We can have billions of sensors which report short pieces of data as events happen; plotting these micro-events lets us deduce macroscopic behavioural information - e.g. whether it is a weekend or a work day. We can do something similar by plotting events into a "social image" where each pixel represents a micro-event, and then find hot spots. We can then have networks for specific topics, and you can select those of interest to you - a personalised web experience for everyone. Delivering this by mobile phone opens it up to the 2 billion "middle tech" users, perhaps without broadband internet access.
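As a rough illustration of the "social image" idea (the function and toy data here are my own invention, not Jain's), geotagged micro-events can be binned into a coarse grid so that hot spots fall out of a simple count:

```python
# Hypothetical sketch: bin geotagged micro-events into grid cells
# and report the hottest cell - the "brightest pixel" of the social image.
from collections import Counter

def social_image(events, cell_size=1.0):
    """events: iterable of (lat, lon) pairs; returns a Counter of grid cells."""
    grid = Counter()
    for lat, lon in events:
        cell = (int(lat // cell_size), int(lon // cell_size))
        grid[cell] += 1
    return grid

# Toy data: a cluster of check-ins around London plus one outlier in New York.
events = [(51.5, -0.12), (51.6, -0.11), (51.4, -0.09), (40.7, -74.0)]
print(social_image(events).most_common(1)[0])  # the London cell, with 3 events
```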

In a change to the published programme, Noshir Contractor spoke on developing web science to understand and enable 21st century multi-dimensional networks. He begins by showing Social Networking in Fur (SNIF) and Lovegety - two devices which allow people to make new connections, but not actually socialise.

We are seeing an increase in team science - a study of Web of Science shows that teams are increasingly composed of authors in different locations, and that these communities produce higher-impact papers. This holds for all fields and team sizes, and is independent of elite universities. These projects do often fail, but when they succeed it is a major success.

Web science is well placed to take a step forward in facilitating collaboration: in understanding how communities form, maintain and dissolve; in the capability (through the semantic web) to organise data on behalf of the community; and in the computational power (through the cloud) to manage the demands of petascale data. This lets us move on from the SNIF/Lovegety mechanism of just identifying connections to really connecting and collaborating with others.

There are many theories on how we network - e.g. self-interest, social exchange, balance, homophily, cognition - so if we can generate structural signatures we can make a judgement on how connections happen in a specific network, and which mechanisms are at work.

The web provides rich data for social science studies; for example, multiplayer games like World of Warcraft provide data on how people collaborate to take part in quests and win prizes - not unlike what happens in reality. Interesting findings included that people tend to play with real-life friends of similar age and game experience, and that distance does matter - geographically closer is better. People are using online games to connect better with real friends.

He demos an application (C-IKNOW) developed for the tobacco research community that can identify people in similar research areas - something that could be very useful for synchrotron research. There is a massive area here where you can identify connections (friends of friends) with similar interests or with particular expertise. I would question how people would keep this accurate and up-to-date. There is also difficulty (as raised in the Q&A) in evaluating this: first, whether people actually follow up and contact the people who are recommended; second, whether these collaborations actually generate good results.

Jonathan Zittrain has the compelling title "Will the web break?", delivered by video conference. Hmm, am having to concentrate too hard to blog this one in detail! Apple and the iPhone are replicating the old CompuServe interface, allowing only Apple-approved programmes to run (through the App Store). There are a lot of imposed restrictions - for example not being able to watch BBC iPlayer outside the UK. bit.ly also poses a threat: its shortened links have become dominant as a result of Twitter, and all of them will break if bit.ly dies. Facebook also challenges the notion of the WWW because of the subtleties resulting from privacy restrictions - a web page being available to an individual is not the same as being available on the www. Google, too, shows different results depending on where you are, which can be seen by searching for "stormfront" on google.com compared to google.de. The web changes depending on where you are - it is not isotropic!

The final talk is Tim Berners-Lee, on Distributed Social Networking through Socially Aware Cloud Data, covering linked open data, read-write data, internet issues and how to preserve the web. Linked Open Data allows identification of objects that are similar - e.g. two references to Copenhagen - and is the new name for the semantic web! Apparently it has been taking off since it was renamed! In DBpedia we can see the read-only Linked Open Data cloud, with data from Wikipedia.

Someone on Twitter says "Listening to Tim speak is like listening to an Open University video on fast forward" and this is true - excitable, informative, but very hard to follow!

Read-write data was the basis of the original web - the first browsers allowed editing as well, but the web changed into a publish-only medium. With blogs, wikis etc we do have writing and editing tools, but we need more. Social networks are also siloed - we can't share information between them. Facebook has very rich semantic data, for example highlighting faces and naming them, but even though these are your photos they are contained within Facebook and you have to log in to access them. We need to be able to move between the two without having to fiddle with an API. Why have data standards and then not use them?!

We have to be cautious with some issues. We must maintain net neutrality, ensuring all sources are available (e.g. not slowing down access to a particular party's website before an election). We must be careful about snooping - if someone accesses a site on STDs they must not be added to lists relating to STDs; privacy is important. People also have a right not to be disconnected just because a company asserts they have downloaded something they should not have - certainly disconnection should only happen with a presumption of innocence and due process of law.

The end - what a fascinating couple of days! The talks will appear as a special edition of the proceedings of the Royal Society early next year, and the slides should be available in about three weeks.


Pierre Levy, a philosopher from the University of Ottawa, began the first session with "The Nature of Collective Intelligence". He begins by showing layers of symbolic tools, from orality, writing and the alphabet, through mass media, to the digital medium. He looked at the concept of a Newtonian revolution in the human sciences, fully exploiting digital media. The object of the human and social sciences is the human mind, or the collective intelligence of the human race - the product of human symbolic cognition and communication. Newton's law of gravitation proposed the idea of gravity as something unique and infinite in nature, as is the mind.

The mind is not material in nature. It does contain ideas, connections between ideas, networks of ideas, and as others have said the future of the human sciences will involve the application of graph theory to this network of ideas. He defines "idea" as having concepts (abstract classes and categories), percepts (images and sense data), and emotional affects.

We can try to represent ideas in computational form. The mind can be viewed as a consistent universe of operations on ideas in a structured semantic space. We can gather data, images etc electronically and categorise these. We cannot, however, represent the concept - the meaning - electronically. We can represent truth in binary form but not much more. We cannot "see" the concept; it is always represented by signifiers (images, audio etc), and concepts are always networked, always linked to other concepts, like a giant grid of concepts, so a concept itself is a network. He outlines a semantic machine which takes signifiers, mechanisms to manipulate the signifiers and the signified, and semantic circuits.

I'm starting to find this a bit confusing and desperately trying to recall discussions of signifiers and semantics from my philosophy of science course but it's all a bit hazy now!

So there are layers of digital medium: the computer links transistors, the internet connects computers, the web connects data, and the semantic web will allow connections between concepts through Uniform Semantic Locators, using a new metalanguage, IEML (Information Economy MetaLanguage).

Manuel Castells is a sociology professor and is here to discuss social networks on the internet: what social research knows about them. The internet has been around for a long time, since ARPANET in the 1960s, but has seen dramatic expansion with the www and the availability of landlines, and will continue to expand with the availability of internet access from mobile communications. There are still inequalities in quality of access, but by 2014 mobile internet users will exceed desktop users. Communication is the defining feature of humanity, so transforming channels of communication transforms society.

We are now entering a networked society, but this does not mean the end of community - instead we are moving to communities based on shared values, interests and projects. Mass media is shifting to mass self-communication based on the internet, but this also combines with offline communication. Network technologies are the medium for this new form of social organisation, which is a global society, as the networks themselves are global.

Mass media still reports bad news, and presents the rise of the internet as a source of alienation and social exclusion, but research shows that the more social individuals are, the more they use the internet - and they use it to strengthen relationships with family, friends and local community. The internet has either a neutral or a positive impact on society in nearly all cultures, and rarely actually creates isolation.

The web is providing tools for individual autonomy - creative, political and social. Internet use empowers people through a sense of security and influence and leads to increased happiness, and this is true for groups that need empowering, for example women (?) who are at the heart of the family and social networks, according to a study by Michael Wilmott. This is circular - the more people use the web, the more autonomous they become, and the more they use the web.

The deepest social transformation from the internet has come in the last decade with social networks. Social networks passed email use in June 2009 in both time spent and number of users. Interestingly, when people find their needs are not being met by existing platforms they create a new one. The new social platforms are not just about conversation, but about actually doing things, taking action, creating content. There is a connection between the development of social networks and social life, but individuals are taking control of how these develop - they are not controlled by governments or corporations. The big sites cannot control how people interact - if they try, someone will create a new site that does what people want and everyone will migrate there. If Facebook turned nasty it would disappear, as AOL did - as seen when Facebook tried to charge and retracted this three days later as people went away.

Session 3 is on governance. The web is central to economic, social and political life, so where is the Government, asks Helen Margetts. Digital era governance replaces new public management, with key themes of reintegration, needs-based holism and digitisation instead of a business focus. The changes are driven by both technological developments and the incoming age of austerity.

The Big Society concept does allow for social media tools to come to the fore: single citizen accounts, citizen surveillance to replace audit systems, social web services within government. Digitisation allows for quasi-voluntary compliance with DIY forms, government super-sites and open data projects freeing public information - all good for governments looking at austerity measures. Delivery of all government information online can reduce costs elsewhere.
The move to the digital era is happening slowly - for example, of the DWP's ~140 million customer contacts in total, only 340,000 are online, and as of 2008 you couldn't apply for any benefits online. There are three possible scenarios:

• Crisis, where government fragments
• Investment pause - government will fall even further behind private sector
• Expansion of digital era governance where the government "becomes" its presence on the web.

Other countries are implementing some social security schemes online - for example in Scandinavia some processes can be carried out entirely online - but not countries the size of the UK, where there is massive cultural resistance.

Luis von Ahn, one of those responsible for developing Captcha - non-machine-readable text that humans can read - spoke on Augmented Intelligence: the web and human computation. Spammers try to circumvent captchas in a range of ways, for example to create new email accounts. There are captcha sweatshops where humans solve captchas, and captchas get redirected to porn sites so that humans solve them in exchange for access to porn.

Now 200 million captchas are typed every day, each taking about 10 seconds of human time. Instead, can we use those 10 seconds for something useful? A solution (through reCAPTCHA) is in digitising books. Optical Character Recognition (OCR) is not perfect - humans do a better job - so the words that OCR cannot recognise are now being used in captchas. This is now used by a lot of companies, including Twitter and Facebook, and up to 85 million captchas are now solved every day. So far 750 million people have solved a word through a captcha - 10% of the global population.
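A quick back-of-the-envelope check of what those figures add up to (using only the numbers quoted in the talk):

```python
# 200 million captchas a day at ~10 seconds each.
captchas_per_day = 200_000_000
seconds_each = 10

hours_per_day = captchas_per_day * seconds_each / 3600
print(round(hours_per_day))  # ~555,556 hours of human effort every single day
```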

He asks, "If we can use 100,000 people to get to the moon and build the pyramids, what can we do with 100 million?" He has a project to translate the most important pages on the web into the world's major languages. A project called Duolingo uses people who are learning foreign languages to act as translators. This works not only on text but also on subtitling videos and training speech recognisers, simultaneously helping the users practise listening and speaking. He estimates that, for example, Wikipedia could be translated from English to Spanish in 80 hours with 1 million users.

In conclusion: we should stop being parasites on computers and allow them to use our brains for processing. Excellent talk from very charismatic speaker!

In the Q&A - there is an ethical question here, that people don't know they are translating / capturing for digitisation. The answer is to ensure that people know what is happening by making the information available next to the captcha, and by projects where the purpose is to help humanity.


The afternoon session is all about engineering the web, starting with Jianping Wu on Towards a next generation internet. He started with the stats: 2 billion people are now online (June 2010), although this is significantly skewed by geographic location, with less than 10% of Africans online. The introduction of the world wide web in 1992 was the single most important event in web history, moving access to the internet from just computer scientists and opening it to everyone.

Challenges to the current internet include scalability, making it available not just to computers but to all kinds of electronics devices, security and ensuring trust, high performance (especially transferring large amounts of data over distance without dropping out), applications in real time, mobile communications over the internet and how the whole thing is managed.

The evolution to the next generation internet (NGI) keeps the "internet DNA" - the communications infrastructure and applications that have already been developed, including things like Facebook, Twitter etc - but meets these challenges and is based on IPv6. There is the possibility of "revolution" - a new architecture that provides a clean slate for development - but the better way is evolution via IPv6.

In China internet research began in 1994 as the internet entered the country. In the second phase there was development of IPv4/IPv6 routers and research looked at implementing the internet. In the third phase, from 2004 onwards, research looked at information architecture. However, even now only 30% of Chinese people have internet access, and they hope to increase this to 70% in the next 10-20 years.

Killer apps for the future appear to be the internet of things, cloud computing and smart planets. There is a difference between web science and network science, but both are new interdisciplinary fields.

Prof Dave Robertson began with the provocative title of Programming the Social Computer. Social computation is defined here as a computation for which a specification exists and whose successful implementation depends on large-scale, computer-mediated social interactions between people. Social properties are the requirements associated with the specification. A social computer is a computing system that allows people to initiate social computations and to adopt roles in computations initiated by others, ensuring in the meantime that the social properties of viable computations are preserved.

An example specification would be trying to identify events that occur in a particular place in a reasonable time frame, where this can't be done technologically. The aim is to find the events as soon as they occur, so people act as sensors for a particular target event. The DARPA Network Challenge released ten weather balloons across the US. The winning solution was to allow people to register online as "reporters" and recommend others to become reporters. These then log on as soon as they find a balloon, and there is a reward for finding them; those who recommended the person that identified the target are also rewarded. In the event the balloons were found in about 10 hours.
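The winning MIT team's incentive scheme worked roughly like this (the code is an illustrative sketch with made-up names, not their implementation): the finder of a balloon received $2,000, the person who recruited them $1,000, that person's recruiter $500, and so on, halving up the chain:

```python
# Recursive incentive sketch: pay the finder, then halve the reward
# at each step up the referral chain.
def rewards(referral_chain, base=2000.0):
    """referral_chain: [finder, finder's recruiter, ...] up the tree."""
    payout = {}
    amount = base
    for person in referral_chain:
        payout[person] = amount
        amount /= 2
    return payout

print(rewards(["carol", "bob", "alice"]))
# {'carol': 2000.0, 'bob': 1000.0, 'alice': 500.0}
```

The geometric decay means the total payout per balloon is bounded (at most twice the base reward) no matter how long the referral chain grows, which is what makes it safe to reward recruiting at all.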

Making this work is very challenging. Alternative social interactions can be determined depending on the peer group, and these can be examined to see how well they fit with the peer group.

There is a large unexplored space for social computation decentralised through society - problems that have a small, direct impact locally that is magnified when replicated globally, but also problems with huge potential impact globally that need a social infrastructure to harness the ingenuity of the human "sensors". This is all speculative - see http://www.socialcomputer.eu/ for more.

"Towards a depersonalisation of the web" was the first topic of the final session today. Web content is now generated by millions of people - in effect, the web has turned social where users can connect and share information. We assume that the web has the answer to all questions, but search engines don't know how to leverage the information.
A solution is "personalised query expansion" where the information searched is restricted based on the profile of the user who is searching, so identifying people who have tried to solve the same problem and matching them through implicit social networks.

In practice this is very hard due to problems of scale, distribution, dynamicity - users can change their minds or their interests - and subjectivity - how does the computer know how happy someone is with the result? The example used delicious tags to create profiles of users based on the items they have tagged and the tags used. However, most people have multiple interests that do not necessarily overlap. Instead users can be matched against a set of other users as a whole rather than against individual users.

How do you then identify and exploit relevant (similar) users? Decentralisation is the key: "gossip protocols" identify "close" neighbours who are similar to you, using short and long links to sample efficiently. There are issues here relating to privacy, but there was no time to discuss those... See www.gossple.fr. There are huge issues relating to criminal use of information gathered by this approach, but equally huge possibilities in online dating!
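A minimal sketch of the gossip idea (illustrative only - not the Gossple protocol itself, and the names and tag sets are invented): each peer keeps a small "view" of its most similar peers, periodically swaps views with a neighbour, and keeps the top-k most similar peers from the union:

```python
# One gossip round: merge my view with a neighbour's view and keep
# the k peers whose tag profiles are most similar to mine.
def similarity(a, b):
    """Jaccard similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def gossip_round(me, my_view, peer_view, profiles, k=2):
    candidates = (my_view | peer_view) - {me}
    return set(sorted(candidates,
                      key=lambda p: similarity(profiles[me], profiles[p]),
                      reverse=True)[:k])

profiles = {"me": {"jazz", "wine"}, "a": {"jazz", "beer"},
            "b": {"wine", "jazz"}, "c": {"golf"}}
print(gossip_round("me", {"a", "c"}, {"b"}, profiles))  # {'a', 'b'}
```

Repeated rounds like this converge each peer's view towards its genuinely most similar peers without any central index, which is the decentralisation point being made.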

David Karger's alternative title was "Instant gratification with the semantic web". The basic premise is that structure makes information more usable and useful, by allowing rich visualisation and interpreting information.

There are several problems at the moment, in that individuals can't communicate their ideas as effectively as large organisations. People can't find data because it's either not there or badly presented. Scientists don't provide enough information - a paper may just contain a pictorial graph without the data. Government information is also not as available or as useful as it could be.

Things have improved, particularly professional websites, which can take advantage of databases where the data is accessed by complex queries and displayed in flashy apps - but "plain" users don't have access to these; all they can do is edit text. Alternatively there are sites like Flickr, Epicurious and YouTube, but then users have to rely on the organisational structure of the "content carrier" (e.g. Flickr), and if your area is obscure there is still no home for it. Mash-ups are one answer, but are still largely the preserve of programmers. How can this be changed?

People do already edit spreadsheets, using templates, different views (lists, thumbnails, details etc) and filtering/sorting information based on structure - they just don't normally do it on the web. There is Exhibit - a publishing framework for data-rich interactive web pages, apparently - which lets you view, aggregate, filter and sort data through a browser. It does seem like a CMS, and a tweet refers to it being very like Drupal.

For some time now content (HTML) has been separated from design (CSS), but he advocates adding another layer for data. The aim is for anyone, not just programmers, to be able to create rich data visualisations. He claims APIs are harmful for just this reason - only programmers even know what an API is, let alone actually use one.

The belief here is that if there was enough simple access to data manipulation tools users would go out and create rich visualisations. I'm unsure if this is the case - who are they creating the visualisations for? If the data itself can be edited where is trust - how do we trust the source of data?


Today I find myself at the Royal Society for a discussion meeting on Web Science: A New Frontier (Twitter #RSWebSci). This is part of a series of seminars the RS are holding as part of their 350th anniversary celebrations on science that will have a significant impact on society over the next few decades. It's an academic meeting primarily, with a stellar cast including the web supernova that is Sir Tim Berners-Lee.

Web science is a fairly new discipline, with the term only being defined in November 2006. The early academics meant the term science in the widest sense: using analytical and mathematical models to understand the mechanics of the web as an engineered construct, the shape and structure of the web, sociological models, scalability and the dynamic nature of links and web content. An important area is the collective intelligence displayed on the web, with examples like Wikipedia and Galaxy Zoo. Governance in the digital era also comes under the web science umbrella, encouraging transparency in government and means for citizens to connect directly with policy makers.

Albert-László Barabási spoke on network science, which underpins many aspects of web science, from the network of computers connected by cables, through hyperlinks between documents, to networks of people and organisations. One of the earliest mathematical models of complex networks goes back to the Erdős-Rényi model from 1960. However, this relied on the network being entirely random, and real networks are not. The World Wide Web represents a huge network of documents - around 800 million when he first studied it in 1999, over a trillion now - that is not random, and it is this non-random nature that makes it suitable for mathematical analysis. The network is scale-free - no single node is representative of the whole. Preferential attachment is at work here: people are likely to link to "more attractive" nodes, those that already have more links and are more commonly used. Mathematically, in a simple model one would expect the oldest nodes to have the most links, but this isn't observed in reality. Here the concept of "fitness" is introduced - a quality that quickly encourages people to link. It's this concept that explains the rise of sites like Facebook.
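Preferential attachment is easy to simulate - the following is a minimal Barabási-Albert-style sketch (my own toy code, not from the talk), in which each new node links to an existing node with probability proportional to that node's current degree, so a few hubs accumulate most of the links:

```python
# Minimal preferential attachment: keep a list in which each node
# appears once per link, so random.choice picks degree-proportionally.
import random

def preferential_attachment(n, seed=42):
    random.seed(seed)
    targets = [0, 1]          # start with a single link: node 0 <-> node 1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        old = random.choice(targets)   # degree-proportional choice
        degree[new] = 1
        degree[old] += 1
        targets += [new, old]          # record both endpoints of the new link
    return degree

deg = preferential_attachment(1000)
print(max(deg.values()))  # the biggest hub far exceeds the average degree of ~2
```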

One interesting comparison is with the "six degrees of separation" supposed to link individuals in society: across the vast number of nodes on the WWW, the average number of links between two documents is 19.
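If I remember the 1999 Albert, Jeong and Barabási study correctly, the 19 comes from the scaling they fitted for the average shortest path between two documents, d ≈ 0.35 + 2.06 log₁₀(N) - treat the exact coefficients as my reconstruction rather than the slide:

```python
# Average "degrees of separation" between two web documents,
# using the 1999 fitted scaling d ~= 0.35 + 2.06 * log10(N).
import math

def avg_separation(n_documents):
    return 0.35 + 2.06 * math.log10(n_documents)

print(round(avg_separation(8e8)))   # 19 for the ~800 million pages of 1999
print(round(avg_separation(1e12)))  # ~25 for a trillion documents
```

Note how slowly the separation grows: the web can get a thousand times bigger while the average path lengthens by only a handful of clicks.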

The discussion is quite mathematical but has some beautiful images of networks under development! The research itself was quite dated, going back to 2001, which in web terms is old; for a general audience (including me!) it's hard to see how much further he could have gone, but the Twitterati seem slightly disappointed...

Robert May from Oxford University looked at the interplay between the structure of networks and their dynamic behaviour, relating to ecosystems, IT networks and also financial systems. In nature there are "food webs" linking predators and prey, and limited deductions from palaeo data show similar structures, at least in the predator/prey ratio.

Another area where networks are manifest in biology is in infectious diseases. Applying this to HIV we see equations for the basic reproductive number - I'm liking the coefficient of new partner acquisition rate, a way of incorporating how individuals with more partners have a higher chance of contracting the virus, referred to as the "super-spreaders". Interestingly there was a discrepancy between the number of partners reported by men and women, which could be based on the old chestnut of men exaggerating sexual experience and women underplaying, but there is a powerful alternative argument that the averages are simply not catching female sex workers (themselves super spreaders). This knowledge provides a way to vaccinate more effectively through targeting the super-spreaders.
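As best I can reconstruct the equation from May and Anderson's work on STD epidemiology (so treat the exact form as my notes, not the slide), the basic reproductive number with that partner acquisition coefficient looks like:

```latex
% beta: transmission probability per partnership
% D:    mean duration of infectiousness
% c:    effective rate of acquiring new partners
R_0 = \beta \, c \, D,
\qquad
c = m + \frac{\sigma^2}{m}
```

where m is the mean and σ² the variance of the distribution of new-partner numbers. The variance term is exactly how a few "super-spreaders" push R₀ above what the average alone would suggest - and why vaccinating them is disproportionately effective.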

Applied to financial systems network science can examine how the collapse of a single bank can propagate through the financial system. He presented a schematic model of nodes in the interbank networks linking net worth, deposits and interbank borrowing with external assets and loans. If a bank loses enough assets, there may be a perception that the other assets possessed by the bank are devalued through "confidence effects". However these models are simple - banks are different sizes with different models (high street banks versus investment banks) that makes the systems very complex. However network theory still provides some generic principles that banking regulators could base policies on - not least that investment and retail banks should be separated.

There are caveats - for example, we can often only study sub-nets of a network, which represent the network as a whole only under restrictive conditions. There are also many dynamic elements that the models cannot capture (see the paper in Nature 451, 893-895, 2008). Caution must be used in applying currently popular models - degree distribution alone is not enough.

Jennifer Chayes from Microsoft presented her talk as a mathematician to whom "all the world is a graph and the people are vertices". Networks in the online world can be modelled as large finite graphs, which must be correctly sampled to be useful. How can we understand and model the processes that take place? Search engines use graph theory to index the structure of the web graph - this is the basis of PageRank. She reviewed growth models, preferential attachment and the "fitness" that László mentioned: instead of a single phase (the oldest website has the most links) there are two separate phases of growth, where the second relates to fitness, with a pattern apparently linked to Bose-Einstein condensation (?).
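The preferential-attachment-with-fitness idea can be sketched in a few lines. This is an illustration in the spirit of the Bianconi-Barabási model rather than the exact model from the talk, and the parameter values are invented: new nodes attach to existing ones with probability proportional to degree times fitness, so a "fit" latecomer can still overtake the oldest nodes.

```python
import random

# Minimal preferential attachment with per-node "fitness": attachment
# probability is proportional to degree * fitness, so high-fitness nodes
# accumulate links faster than age alone would predict.
def grow(n, fitness, seed=0):
    rng = random.Random(seed)
    degree = [1, 1]  # start with two nodes joined by one edge
    for new in range(2, n):
        weights = [degree[i] * fitness[i] for i in range(new)]
        target = rng.choices(range(new), weights=weights)[0]
        degree[target] += 1
        degree.append(1)  # the new node arrives with one link
    return degree

fitness = [1.0] * 200
fitness[50] = 20.0  # one highly "fit" latecomer among ordinary nodes
deg = grow(200, fitness)
print(max(range(200), key=lambda i: deg[i]))  # the fit node often ends up best connected
```

With uniform fitness this reduces to plain preferential attachment, where the earliest nodes dominate - the "one phase" Chayes contrasted with the fitness regime.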

Competition models look at how a new vertex chooses which vertex to attach to based on competitive factors. There is also a game theory model, developed just last year, in which an agent "sponsors" a particular event: there is some cost to the initiating agent but benefits to all agents that attend. Attending a single event does not guarantee connections, but attendance at several of the same parties increases the likelihood. This is called the "hitch-hiker" model, as it's possible to make connections at no cost just by "hitch-hiking" at these events.

As the web continues to grow it's a challenge to sample it usefully - here we need to impose limits.

Network algorithms are a big area for commercial research - ranking algorithms, and clustering algorithms to minimise spam and to identify new products - for example Amazon's recommendations based on your previous purchases and those of other users (collaborative filtering). See also the Netflix prize.
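The "customers who bought X also bought Y" idea is simple enough to sketch. This is a toy illustration of the collaborative-filtering principle, emphatically not Amazon's actual algorithm, with invented shopping baskets: recommend the items most often co-purchased with things you already own.

```python
from collections import Counter

# Toy collaborative filtering: score each item by how often it appears
# in other customers' baskets that overlap with yours.
baskets = [
    {"book", "lamp"},
    {"book", "pen"},
    {"book", "pen", "lamp"},
    {"pen", "mug"},
]

def recommend(owned, baskets):
    scores = Counter()
    for basket in baskets:
        if owned & basket:               # this customer overlaps with us
            for item in basket - owned:  # score items we don't yet own
                scores[item] += 1
    return [item for item, _ in scores.most_common()]

print(recommend({"book"}, baskets))  # 'pen' and 'lamp' are co-purchased with 'book'
```

Real systems refine this with similarity weighting, ratings and matrix factorisation (the route the Netflix prize made famous), but the graph intuition - people as vertices, shared purchases as edges - is the same.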

Here again there are parallels with the spread and containment of epidemics on networks, which apply equally to HIV and to mutating worms spreading across the web. Where there is a limited amount of vaccine available, the question is how to identify those to "cure". Contact tracing lets you identify who to treat, by finding those with a high number of "neighbours" who could be infected - the female sex workers in the HIV example above.
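A quick toy example of why targeting the well-connected pays off (the network and numbers here are invented for illustration): with one dose of vaccine, immunising a hub cuts many more transmission routes than immunising a random peripheral node.

```python
# Degree-targeted immunisation: count how many transmission routes
# (edges) a vaccination choice removes from the contact network.
def edges_removed(graph, vaccinated):
    """Count edges with at least one vaccinated endpoint."""
    return sum(1 for a, b in graph if a in vaccinated or b in vaccinated)

# A hub-and-spoke contact network: node 0 is a "super-spreader"
# connected to eight others, plus two peripheral links.
graph = [(0, i) for i in range(1, 9)] + [(1, 2), (3, 4)]

by_degree = {0}      # one dose, given to the hub
at_random = {5}      # one dose, given to a peripheral node
print(edges_removed(graph, by_degree))  # 8 routes cut
print(edges_removed(graph, at_random))  # only 1 route cut
```

This is the logic behind contact tracing: you rarely know the whole network, but each case's neighbours point you towards the high-degree nodes.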

Finally she discussed the use of algorithms in developing trust networks, and concluded that maths can be used to understand and analyse online processes, trust, dynamics and incentives in online networks.

Jon Kleinberg looked at the analysis of large-scale social and information networks - a topic very relevant to synchrotron science, given the phenomenal amount of information generated by the machines. He stated that "science advances when we make the invisible become visible", and the web provides digital, "visible" traces of social interactions. He gave a lovely example of plotting geotagged photos uploaded to Flickr, which generates a visible map of Europe (brighter in the west than the east): each dot represents a photo uploaded from that location, and individual cities can be identified.

Further, these photos are tagged with keywords that can be used to identify particular locations from their textual descriptions, and can be used to generate crowd-sourced tourist maps based just on Flickr - as M@ has already discussed over at the Londonist.

This concept can be thought of as the information available to Martians looking down on the Earth - how could they use this data to understand society?

Mathematical models can be used to handle notions of "distance", geographic and otherwise (see Milgram's six degrees of separation). Facebook generates rich sociological data here by "persuading millions of people to tell us about their friends and where they are". This has the potential to address questions like the probability of forming new friendships based on the behaviour of existing friends, how friendships are formed, and the similarities between groups of friends. There are caveats - both large and small groups may be assessed, and in the large groups seen online, subtleties about the origins of links are lost. There are also differences between online and offline behaviour (although these are believed to be converging).

There are diffusion curves for the adoption of behaviours in online activities - e.g. how likely individuals are to start editing Wikipedia articles based on how many of their friends already edit Wikipedia. The data seem to show that adoption is more likely the more the people within the group are connected to each other - a tightly connected set of friends exerts a greater "gravitational pull" towards an activity than the same number of less connected friends.
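A toy version of that observation (illustrative only - the real result came from empirical Wikipedia data, and the names here are invented): a crude "connectedness" score for your adopting friends is simply the number of friendships among them, and the finding is that this score, not just the count of adopters, predicts adoption.

```python
# Count edges *among* your adopting friends - a crude measure of how
# connected to each other they are, which Kleinberg reported predicts
# adoption better than the raw number of adopting friends.
def internal_edges(adopting_friends, friendships):
    return sum(1 for a, b in friendships
               if a in adopting_friends and b in adopting_friends)

friendships = [("ann", "bob"), ("bob", "cat"), ("ann", "cat"), ("dan", "eve")]

clique = {"ann", "bob", "cat"}    # three mutually connected adopters
scattered = {"ann", "dan", "eve"}  # three adopters, mostly strangers

print(internal_edges(clique, friendships))     # 3 - high "gravitational pull"
print(internal_edges(scattered, friendships))  # 1 - weaker pull
```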

Data from positive and negative reviews (likes/dislikes) provide additional information on how friendships extend - cf. "the enemy of my enemy is my friend" - is the like/dislike decision influenced by the views of friends? Here he gave the example of how admins on Wikipedia are elected by other users with a support/oppose system. This has an impact on status: A has higher status than B if A has more positive votes. The data can reveal subtleties in how people evaluate each other, and on Wikipedia it seems that users of similar status evaluate each other more harshly than those who are more distant in status. The reasons for this are not well understood - nor even whether these observations are specific to a particular network such as Wikipedia.

He finished with a thought-provoking comparison of online social networks to autopsies: they lay open the inner workings of society, dissecting its connections and analysing its anatomy. Autopsies led to the extensive knowledge of the human body that we have today - will online social networks do the same for society?


Just a few weeks to go until Science Online London, which I am very much looking forward to. There is an exciting programme, and I'm only disappointed that I can attend just the first day - in particular I'll miss Evan Harris and the Sci-vote discussion, and the session on researchers and social media tools.

However, the good news is that I'm hoping to welcome some of the delegates to Diamond as part of the Fringe programme, where we're opening up the machine so people can have a look around the beast that is the synchrotron - and supplying tea and a glass of wine as a thank you for travelling out of London. I've attended the previous two Science Online London bashes and found them very interesting and useful, and as a science communicator I strongly believe in taking advantage of online tools to spread knowledge between scientists and to get news about science out to a wider audience - which is why we are hoping to see delegates visiting the facility. And hopefully it will be interesting for them too!


This is a question that was posed to me recently and it's led to a lot of thought on my part.

It has been a busy few weeks. I have been helping out at the Royal Society Summer Exhibition, explaining our science to South Bank visitors (whether they like it or not!). Then I was preparing a talk on social media for science, and how it could benefit the wider synchrotron community, for the lightsources collaboration - this follows our February meeting at SLAC, and the latest meeting was at our fellow synchrotron, the ESRF in Grenoble. So the next stage was to fly out to Grenoble to give the talk and plan a future strategy based on the conference discussions (more on that later).

Next up was the SRMS conference (Synchrotron Radiation in Materials Science) hosted by Diamond. I attended all the plenary sessions and the catalysis and extreme conditions sessions, which I'm particularly interested in.

Back at Diamond I've been down on the beamlines talking to users about their research, trawling publications databases, reading scientific papers (users are publishing 10-15 a month now) and doing general administration and project management. I'm also looking at the beamline user manuals to see how our documentation procedures can be improved, training people on how to use the software, and trouble-shooting our online systems.

And then I helped out at an E4P event. Essentially this is an initiative to get secondary school students to visit working research environments and talk to people with physics degrees about what they do. I must admit I had a few concerns about this - it is a very long time since my physics degree, and the kids would get to talk to real working scientists as well, the people who develop and run the beamlines and other parts of the machine - but also I was the only woman there, and the only one no longer formally a scientist.

As it panned out, one of the physicists was called out to fix an instrumentation problem on one of the beamlines. So there was a speed dating exercise where the school groups got to talk to each of us in turn about what we did, how we ended up where we were and what we thought about physics as a career. The question I was asked most was, "Don't you miss doing proper science?"

This is an interesting question. Up until I left my last job, even though 90% of my time was office based, I was still working in the lab at least once a week. But that was an area I specialised in (electromagnetics). Here at Diamond, whilst I have a good overview of the machine and the beamlines there is no area where I could work straight away, I would need to either do more academic study or learn on the job. The latter wouldn't be very useful to Diamond as it would mean a beamline scientist essentially training me, taking up his or her time when they have better things to do.

So the former is the better option, more academic study. I could go on to do a PhD (not doing this immediately after I graduated remains one of my greatest regrets). However, this would mean getting accepted as a mature student followed by three to four years of student living on a meagre bursary. I have seriously considered this, but it's a huge commitment, so I would have to find a subject I really loved, such as cosmology or taking quite a different tack and moving into oceanography.

But what then? I become a real working scientist once more, many steps lower on the career ladder than my contemporaries. And then I could spend my time reading scientific papers, training people in using instrumentation, writing talks and attending conferences. I would be documenting my work and defining processes for experiments, project managing and doing administration. And, if I'm lucky, spending 10% of my time in the lab. So would I really gain anything? Would I actually be "doing proper science" much more than I am now?

There are things I really miss about being able to call myself a working scientist. The struggle to make a new experiment work, satisfaction at identifying and solving problems that have never been seen before, and the knowledge that you are part of a global effort to help us understand the world we live in, even when things go wrong. But on a day to day basis I'm not sure that my life would be significantly different, just considerably more uncertain, competing for limited post-doc positions, finding funding and the likelihood of having to move every few years.

So do I miss doing proper science? Yes and no. I think I would be at my happiest with a compromise - continue with my current day job but spend a percentage of my time on one of the beamlines, learning on the job and supporting visiting scientists. But without being able to immerse myself completely in research, I'm not sure how useful I would actually be. And would I spend much more time on "proper science" than I do now?


Just back from my first full day at the Royal Society's Summer Exhibition, this time held on London's South Bank. I'm something of a veteran at these events: this is my third time as an exhibitor, once before with Diamond and before that with the National Physical Laboratory. I've been doing public engagement for many more years, but the Royal Society event comes round less often than, say, the British Science Association Festival, as the competition to exhibit here is much more intense.

Pterosaurs fly outside the Southbank Royal Festival Hall

This year, as it's the 350th anniversary of the RS, there are a few more exhibitors than usual, but the quality remains particularly high. I shan't say more about our contribution, other than that there is a very nice website which describes the stuff we're showing there (I'm particularly pleased with the new synchrotron machine simulations we developed for that one). For me, whilst I like talking about what we do, I like the RS because I get to find out what other people are up to, and how they are representing their work to the public.

Perhaps getting the most media coverage are the fantastic Festo penguins. These silver monsters glide gracefully round the hall every half hour, stopping each new visitor in their tracks with a gentle, sinuous motion that is as close to real penguin movement as David Attenborough has ever shown me. My first feeling was that, given we are now at the South Bank, this particular exhibit owed more to art than science, and my robotics engineer partner was quick to point out that under the elegant exterior was probably a simple remote-controlled blimp that gave the penguin the gift of flight.

However, on talking to the engineers, there is a bit more to it than that. The penguin's forward motion is provided entirely by the flippers, and the steering by a series of thin circles that make up the nose cone. Whilst in the confines of the South Bank they are controlled remotely, in larger spaces they are apparently able to fly autonomously, sense their surroundings and communicate with each other. In addition to the silver penguins there are also a ray and a jellyfish, but apparently we don't get to see them this time around.

As well as being distracted by giant marine fauna, I did check out the stands surrounding ours. I wanted to see the flesh-eating Leishmania parasites in the next booth but was scared off by the very enclosed exhibition space. I did extract some DNA from a concoction of peas and strawberry juice, an analogue for HIV in blood, where DNA is extracted and analysed so that doctors can decide on the best clinical treatment for a particular HIV-positive individual.

I also spent a while with the people from the European Extremely Large Telescope, who have a gravity-defying stand made up of a series of hexagons, representing the segments of mirror they'll need to build a 42 m optical telescope in Chile - the biggest optical telescope in the world, provided the project gets approval. I had a long chat there about why the current very largest telescopes are all radio telescopes, what optical telescopes can see that radio telescopes can't, and what advantages the new one will have over Hubble, such as a fuller field of view and the ability to collect more light. These should enable it to see "first light" - the formation of the early galaxies - and also to detect more exo-planets: rather than the gas giants that make up most of the nearly 500 known exo-planets, we might be able to find smaller planets with atmospheres more like our own, with the potential to support life. They gave me a very shiny sticker too.

My ideal alternative science fields (if you count Diamond as particle physics) are cosmology and oceanography, so the other exhibit of interest to me was Arctica islandica, the longest-lived animal on earth - an unprepossessing mollusc which met its untimely demise at the tender age of 500 at the hands of researchers at the University of St Andrews. Like a tree, the clam lays down rings that give away its age, and these also have the potential to provide a historical climate record - for example, thicker rings could indicate a better year for food, which could in turn provide information on climate. Not only that, the clam's advanced age could provide insights into the ageing process. In contrast to some of the high-impact exhibits, I liked the way the basic showpiece was just the simple clam, about 10 cm in diameter and looking just like the kind of shell one could collect from an average beach.

That was about all I could take in on one day!

Disclaimer: All of the people who explained things to me were excellent, any misunderstanding or misrepresentations of the science are down to me!


I am feeling guilty. I am not entirely to blame, but this is the guilt of inaction rather than action. You see, last Thursday my MP lost his seat by a minuscule 176 votes. My guilt stems not from not voting for him, but from not doing more to help the cause.

Before this election I wasn't that interested in politics. I'm not the kind of person who stays up all night to watch the votes come in, or who actively campaigns for a party. I'm not loyal to any particular party; I have voted differently in different elections, depending on where I lived at the time. But this was the first time I've felt really engaged with my own MP. I'm talking, of course, about Dr Evan Harris.

I don't agree with everything Harris stands for (he voted against the hunting ban, for example), but several of his causes are particularly close to my heart: his campaigns for libel reform (and support of the science writer Simon Singh), his stance on evidence-based policy and voluntary euthanasia, his support of the researchers at Oxford, and his secularist approach to politics. However, I did write to him on two occasions and only once received a reply (that on libel reform). On the other issue, wheelchair access to public services in the constituency, I received nothing.

So I can see why people didn't vote for him, as he was more engaged with national issues (where to my mind he played a vital role and will be sorely missed) than with local concerns.

But it's unlikely this is the only reason people didn't re-elect him. Two leaflets were shoved through my door in the days before the election: one from the Animal Protection Party (more here on their views), and one from the Reverend Lynda Rose, supposedly representing "concerned constituents".

The APP received 143 votes, so there doesn't seem to have been that much of an impact (although obviously people could have voted for alternative parties in response). It was the Lynda Rose leaflet that bothered me more.

The arguments have been covered well over at the Lay Scientist, and Evan Harris himself has responded in the Oxford Mail. There has also been a considerable debate in the blog pages of the Telegraph.

What I find most frustrating about so many of these articles is the inability to dissociate secularism from Christian-bashing. Whilst Evan Harris makes no secret of the fact that he believes religion and politics should be separate, he does not promote attacks on Christianity itself; he simply advocates no special treatment. His stance on faith schools, for example, is that schools should not be able to discriminate against parents on the basis of their faith, whether that faith is Islam, Christianity or atheism.

But for all those who object to his views, the arguments are all framed in an overly simplified fashion - Harris promotes the legalisation of euthanasia, liberalising abortion - ignoring the more complex elements and, in the case of the use of animals in experiments and the use of stem cell research, the science. Reality is not so black and white. The nickname Dr Death is symptomatic of this - different sources blame this unpleasant moniker on his record variously on animal rights, abortion and euthanasia, but it is a simple, nasty attempt to portray his views in the most negative light.

I find it hard to understand why the separation of church and state is so controversial. My sister is a vicar: I am an atheist and skeptic. Admittedly she is a very liberal vicar, but nevertheless she and I hold very similar political views, particularly on human rights - our religious differences simply aren't relevant where politics is concerned. For me, if I ask my MP what religion they are, the best response from my perspective is simply, "That's nothing to do with my politics."

So whilst I'm not alone in being disappointed that as an MP Harris didn't do enough for his constituency, I do believe that the negative campaigns against him were out of line, and I hope we do see his return to Parliament, standing up for the things I believe in. And in the next election, I might just end up campaigning for him.


Today (12 May 2010) is the 100th anniversary of the birth of Britain's only female Nobel prize-winning scientist, Dorothy Hodgkin. With a career spanning seven decades, she pioneered the field of structural biology, particularly the use of X-rays to determine crystal structures.

Five years ago I had hardly heard of Dorothy, but since starting at Diamond I've become very familiar with her work and her legacy. When Diamond started up in January 2007, three of the first seven beamlines were dedicated to crystallography; now there are five. That decision was driven by the demands of the UK structural biology community, of which she was a founder.

A few months ago I started researching Dorothy in more detail (see this post), and put together an article on the Diamond website celebrating her legacy. As part of this I've been reading Georgina Ferry's excellent biography, seen a play based on the book, read a range of scientific and historical papers, and spoken to a few of those who knew her. What is striking is not just what she achieved, but how well known and respected she was, whilst remaining incredibly modest.

She had a unique upbringing, which must have helped prepare her for a life in a male-dominated world, but that alone cannot account for her achievements. I was told on more than one occasion not to refer to her by her surname - "everyone called her Dorothy" - and that she never referred to herself as Doctor. She never engaged in self-promotion, despite managing astonishing achievements alongside bringing up four children, dealing with rheumatoid arthritis and being an international campaigner for peace.

So here is my article, but, not surprisingly, I'm not the only one writing about her today - the BBC have covered the story, there is a profile on Woman's Hour and the Royal Society have an event tonight. In putting the article together I've learnt a lot, but I've also realised how much I don't know, and how privileged I would have been to meet her.
