Who knew we still had low-hanging fruit?

Earlier this month I had the pleasure of attending the Web Engines Hackfest, hosted by Igalia at their offices in A Coruña, and also sponsored by my employer, Collabora, Google and Mozilla. It has grown a lot and we had many new people this year.

Fun fact: I am one of the 3 or 4 people who have attended all of the editions of the hackfest since its inception in 2009, when it was called WebKitGTK+ hackfest \o/


It was a great get-together where I met many friends and made some new ones. I had plenty of discussions, mainly with Antonio Gomes and Google’s Robert Kroeger, about the way forward for Chromium on Wayland.

We had the opportunity to explain how we at Collabora cooperated with Igalians to implement and optimise a Wayland nested compositor for WebKit2 that shares buffers between processes in an efficient way, even on broken drivers. Most of the discussions and some of the work that led to this were done at previous hackfests, by the way!


The idea seems to have been mostly welcomed, the only concern being that Wayland’s interfaces would need to be tested for security (fuzzed). So we may end up going that same route with Chromium to allow process separation between the UI and GPU (currently being renamed Viz) processes.

On another note, and going back to the title of the post, at Collabora we have recently adopted Mattermost to replace our internal IRC server. Many Collaborans have decided to use Mattermost through an Epiphany Web Application or through a simple Python application that just shows a GTK+ window wrapping a WebKitGTK+ WebView.


Some people noticed that when the connection was lost Mattermost would take a very long time to notice and reconnect – its web sockets were taking a long, long time to time out, according to our colleague Andrew Shadura.

I did some quick searching in the codebase and noticed WebCore has a NetworkStateNotifier interface that it uses to get notified when the connection changes. It was not implemented for WebKitGTK+, which was likely what caused things to linger when a connection hiccup happened. Given we have GNetworkMonitor, implementing the missing interface required only 3 lines of actual code (plus the necessary boilerplate)!
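For the curious, here is a minimal sketch, in plain GLib C, of the kind of wiring involved – the callback and what it would call into are illustrative, not the actual WebKit patch:

    /* Minimal sketch: listen for connectivity changes with GNetworkMonitor.
     * In WebKit the callback would poke WebCore's NetworkStateNotifier so
     * pages get online/offline events and stale sockets can be dropped. */
    #include <gio/gio.h>

    static void
    network_changed_cb (GNetworkMonitor *monitor,
                        gboolean         available,
                        gpointer         user_data)
    {
        g_print ("Network is now %s\n", available ? "available" : "unavailable");
    }

    int
    main (void)
    {
        GMainLoop *loop = g_main_loop_new (NULL, FALSE);
        GNetworkMonitor *monitor = g_network_monitor_get_default ();

        g_signal_connect (monitor, "network-changed",
                          G_CALLBACK (network_changed_cb), NULL);

        g_main_loop_run (loop);
        return 0;
    }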


I was surprised to still find such low-hanging fruit in WebKitGTK+, so I decided to look for more. It turns out WebCore also has a notifier for low-power situations, which had been implemented only by the iOS port; it causes the engine to throttle some timers and avoid some expensive checks it would do in normal situations. This one required a few more lines to implement using upower-glib, but not that many either!
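Again for illustration only, a sketch of the same pattern with upower-glib – the actual WebKit code differs, and the throttling logic is omitted:

    /* Minimal sketch: watch battery state with upower-glib. WebKit would
     * use a notification like this to throttle timers and skip expensive
     * work in low-power situations. */
    #include <libupower-glib/upower.h>

    static void
    on_battery_changed_cb (UpClient *client, GParamSpec *pspec, gpointer data)
    {
        g_print ("On battery: %s\n",
                 up_client_get_on_battery (client) ? "yes" : "no");
    }

    int
    main (void)
    {
        GMainLoop *loop = g_main_loop_new (NULL, FALSE);
        UpClient *client = up_client_new ();

        g_signal_connect (client, "notify::on-battery",
                          G_CALLBACK (on_battery_changed_cb), NULL);

        g_main_loop_run (loop);
        return 0;
    }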

That was the fun I had during the hackfest in terms of coding. Mostly I had fun just lurking in breakout sessions discussing the past, present and future of tech such as WebRTC, Servo, Rust, WebKit, Chromium, WebVR, and more. I also beat a few challengers in Street Fighter 2, as usual.

I’d like to say thanks to Collabora, Igalia, Google, and Mozilla for sponsoring and attending the hackfest. Thanks to Igalia for hosting and to Collabora for sponsoring my attendance along with two other Collaborans. It was a great hackfest and I’m looking forward to the next one! See you in 2018 =)

A tale of cylinders and shadows

Like I wrote before, we at Collabora have been working on improving WebKitGTK+ performance for customer projects, such as Apertis. We took the opportunity brought by recent improvements to WebKitGTK+ and GTK+ itself to make the final leg of drawing contents to screen as efficient as possible. And then we went on to investigate why so much CPU was still being used in some of our test cases.

The first weird thing we noticed was that performance was actually worse on Wayland than when running under X11. After some investigation we found that a lot of time was being spent inside GTK+, painting the window’s background.

Here’s the thing: the problem only showed under Wayland because in that case GTK+ is responsible for painting the window decorations, whereas in the X11 case the window manager does it. That means all of that expensive blurring and rendering of shadows fell on GTK+’s lap.

During the web engines hackfest, a couple of months ago, I delved deeper into the problem and noticed, with Carlos Garcia’s help, that it was even worse when HiDPI displays were thrown into the mix. The scaling made things unbearably slower.

You might be wondering why painting window decorations would be such a problem anyway. They should only be repainted when a window changes size or state, which should be pretty rare, right? Right, but that is precisely one of the reasons we had to make it fast: the resizing experience was pretty terrible. We’ll get back to that later.

So I dug into that, made a few attempts at understanding the issue and came up with a patch showing that applying the blur was way too expensive. After a bit of discussion with our own Pekka Paalanen and Benjamin Otte we found the root cause: a fast path was not being hit by pixman due to the difference in scale factors between the shadow mask and the target surface. We made the shadow mask’s scale the same as the surface’s and voilà, sane performance.
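The gist of the fix, as a sketch – this is not the actual GTK+ patch, and the helper name is made up:

    /* Create the shadow mask at the same device scale as the target
     * surface, so pixman compositing between the two hits its fast path
     * instead of a slow resampling one. */
    #include <cairo.h>

    static cairo_surface_t *
    create_shadow_mask (cairo_surface_t *target, int width, int height)
    {
        double sx, sy;
        cairo_surface_get_device_scale (target, &sx, &sy);

        cairo_surface_t *mask =
            cairo_image_surface_create (CAIRO_FORMAT_A8,
                                        (int) (width * sx),
                                        (int) (height * sy));
        cairo_surface_set_device_scale (mask, sx, sy);
        return mask;
    }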

I keep talking about this being a performance problem, but how bad was it? In the following video you can see how huge the performance impact of this problem was on my fairly recent laptop with a HiDPI display. The video starts with an Epiphany window running with a patched GTK+, showing a nice demo the WebKit folks cooked up for CSS animations and 3D transforms.

After a few seconds I quickly alt-tab to the version running with unpatched GTK+ – I made the window the exact size and position of the other one, so that it is under the same conditions and the difference can be seen more easily. It is massive.

Yes, all of that slowdown was caused by repainting window shadows! OK, so that solved the problem for HiDPI displays and made resizing saner – great! But why is GTK+ repainting the window even if only the contents are changing, anyway? Well, that turned out to be an off-by-one bug in the code that checks whether the invalidated area includes part of the window decorations.

If the area being changed spanned the whole window width, say, it would always cause the shadows to be repainted. By fixing that, we now avoid all of the shadow drawing code when we are running full-window animations such as the CSS poster circle or gtk3-demo’s pixbufs demo.
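To illustrate the class of bug – this is a made-up check, not the actual GTK+ code:

    /* The content area spans [border, border + content_width) in window
     * coordinates; both ends below are exclusive. */
    #include <stdbool.h>

    static bool
    invalidation_touches_decorations (int invalid_x, int invalid_width,
                                      int border, int content_width)
    {
        int invalid_end = invalid_x + invalid_width;
        int content_end = border + content_width;

        /* Buggy: '>=' makes an update that ends exactly at the content
         * edge count as touching the border, so any full-width repaint
         * drags the shadows along with it; it should be '>'. */
        return invalid_x < border || invalid_end >= content_end;
    }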

As you can see in the video below, the gtk3-demo running with the patched GTK+ (the one on the right) is using a lot less CPU and has smoother animation than the one running with the unpatched GTK+ (left).

Pretty much all of the overhead caused by window decorations is gone in the patched version. It is still using quite a bit of CPU to animate those pixbufs, though, so some work still remains. Also, the overhead added to integrate cairo and GL rendering in GTK+ is pretty significant in the WebKitGTK+ CSS animation case. Hopefully that’ll get much better from GTK+ 4 onwards.

Web Engines Hackfest 2016!

I had a great time last week at the web engines hackfest! It was the 7th web engines hackfest hosted by Igalia and the 7th hackfest I attended. I’m almost a local Galician already. Brazilian Portuguese being so close to Galician certainly helps! Collabora co-sponsored the event and it was great that two colleagues of mine managed to join me in attendance.

There were great talks that will eventually end up as videos uploaded to the web site. We were amazed at the progress being made on Servo, including some performance results that blew our minds. We also discussed the next steps for WebKitGTK+, WebKit for Wayland (or WPE), our own Clutter wrapper for WebKitGTK+ which is used for the Apertis project, and much more.

Zan giving his talk on WPE (former WebKitForWayland)

One thing that drew my attention was how many Dell laptops there were. Many Collaborans (myself included) and Igalians are now using Dells, it seems. Sure, there were ThinkPads and MacBooks, but there were plenty of Inspirons and XPSes as well. It’s interesting how the brand makeup has shifted over the years since 2009, when the hackfest could easily have been mistaken for a ThinkPad shop.

Back to the actual hackfest: with the recent release of GNOME 3.22 (and Fedora 25 nearing release), my main focus was on dealing with some regressions users experienced after a change in how the final rendering composited by the nested Wayland compositor we have inside WebKitGTK+ is put onto the GTK+ widget to be shown on screen.

One of the main problems people reported was applications that use WebKitGTK+ not showing anything where the content was supposed to appear. It turns out the problem was caused by GTK+ not being able to create a GL context. If the system was simply not able to use GL there would be no problem: WebKit would then just disable accelerated compositing and things would work, albeit slower.

The problem was that WebKit was able to use an older GL version than the minimum required by GTK+. We fixed it by testing that GTK+ is able to create GL contexts before using the fast path, falling back to the slow glReadPixels codepath if not. This way we keep accelerated compositing working inside WebKit, which gives us nice 3D transforms and less repainting, but take the performance hit in the final “blit”.
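The idea of the check, sketched out – not the actual WebKit code:

    /* Probe whether GTK+ can create and realize a GL context for this
     * window before committing to the GL fast path; otherwise fall back
     * to the slower glReadPixels path. */
    #include <gtk/gtk.h>

    static gboolean
    can_use_gtk_gl (GdkWindow *window)
    {
        GError *error = NULL;
        GdkGLContext *context = gdk_window_create_gl_context (window, &error);

        if (!context || !gdk_gl_context_realize (context, &error)) {
            g_clear_error (&error);
            g_clear_object (&context);
            return FALSE;
        }

        g_object_unref (context);
        return TRUE;
    }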

Introducing “WebKitClutterGTK+”

Another issue we hit was GTK+ not properly updating its knowledge of the window’s opaque region when painting a frame with GL, which led to some really interesting issues, like a shadow appearing when you tried to shrink the window. There was also an issue where the window would not use all of the screen when fullscreened, which was likely related. Both were fixed.

André Magalhães also worked on a couple of patches we wrote for customer projects and are now pushing upstream. One enables the use of more than one frontend to connect to a remote web inspector server at once. This can be used to, for instance, show the regular web inspector on a browser window and also use IDE integration for setting breakpoints and so on.

The other patch was cooked by Philip Withnall and helped us deal with some performance bottlenecks we were hitting. It improves the performance of painting scroll bars. WebKitGTK+ does its own painting of scrollbars (we do not use the GTK+ widgets for various reasons). It turns out painting scrollbars can be quite a hit when the page is being scrolled fast, if not done efficiently.

Emanuele Aina had a great time learning more about Meson while figuring out a build issue we had when a more recent GStreamer was added to our jhbuild environment. He came out of the experience rather sane, which makes me think Meson might indeed be much better than autotools.

Igalia 15 years cake

It was a great hackfest, great seeing everyone face to face. We were happy to celebrate Igalia’s 15 years with them. Hope to see everyone again next year =)

WebKitGTK+ 2.14 and the Web Engines Hackfest

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown on screen. It was not ready, though, and there were questions as to whether that was the way to go; alternative proposals for how best to implement it were floating around.
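To give an idea of the mechanics, here is a tiny sketch of the very first step of such a nested compositor – the socket name is made up, and all the interface implementations a real compositor must register are omitted:

    /* The UIProcess creates a private Wayland display and advertises a
     * socket; WebProcesses connect to it as ordinary Wayland clients and
     * share their rendered buffers with the UIProcess. */
    #include <wayland-server.h>

    int
    main (void)
    {
        struct wl_display *display = wl_display_create ();

        if (wl_display_add_socket (display, "webkit-wayland-0") != 0)
            return 1;

        /* A real compositor registers wl_compositor and friends here. */
        wl_display_run (display);

        wl_display_destroy (display);
        return 0;
    }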

At last year’s hackfest we discussed what the best path for that would be, and Collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit to figuring out how to implement it in a way that was both efficient and platform-agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof-of-concept investigation, we got this WebGL car visualizer running quite well on our SABRE Lite i.MX6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.
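For reference, this is roughly what that gdk_cairo_draw_from_gl() path looks like from an application’s draw handler – the texture id, scale and sizes are placeholders:

    /* Hand a GL texture straight to GDK instead of reading pixels back
     * with glReadPixels. */
    #include <gtk/gtk.h>
    #include <epoxy/gl.h>

    static gboolean
    draw_cb (GtkWidget *widget, cairo_t *cr, gpointer user_data)
    {
        guint texture_id = GPOINTER_TO_UINT (user_data); /* rendered elsewhere */

        gdk_cairo_draw_from_gl (cr,
                                gtk_widget_get_window (widget),
                                texture_id, GL_TEXTURE,
                                1 /* buffer scale */, 0, 0,
                                gtk_widget_get_allocated_width (widget),
                                gtk_widget_get_allocated_height (widget));
        return TRUE;
    }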

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. In our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emmanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl() though, so we will look into that.

Another very interesting Igalian addition to this release is support for the MemoryPressureHandler even on systems with no cgroups set up. The memory pressure handler is a WebKit feature which flushes caches and frees resources that are not being used when the operating system notifies it that memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, applications will benefit even without a custom cgroups setup, by having the UIProcess monitor memory usage and notify each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept, both for Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit – the new Wayland port announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not-so-small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play YouTube videos and stay quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: HTML5 notifications support. With help from Carlos Garcia, I managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, and to show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.
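As a sketch of what the API-user side looks like with the WebKitGTK+ permission request API (the notification permission request type arrived a little later, in the 2.8 era; auto-allowing every request is for illustration only):

    #include <webkit2/webkit2.h>

    /* Connected to WebKitWebView's "permission-request" signal. */
    static gboolean
    permission_request_cb (WebKitWebView           *web_view,
                           WebKitPermissionRequest *request,
                           gpointer                 user_data)
    {
        if (WEBKIT_IS_NOTIFICATION_PERMISSION_REQUEST (request)) {
            webkit_permission_request_allow (request);
            return TRUE; /* handled */
        }
        return FALSE; /* fall through to the default handler */
    }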

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show-stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance, and reject the notification if it cannot find one. Besides making this a pain to test – our test browser would need a .desktop file to be installed – it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that does not match any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

Yay, the left won! Or did it?

Originally published on politi.kov

I have been asked by a bunch of friends from outside of Brazil for my opinion regarding the recent elections we had in Brazil, and it is a bit complicated to explain it without some background, so I decided to write this piece providing a bit of history so that people can understand my opinion.

The elections this year were a rematch of our traditional polarization between the Workers’ Party (PT) and the Social Democracy Party (PSDB), which has been going on since 1994. PT and PSDB used to be allies. In the 80s, when the dictatorship dropped the law that forbade more than 2 parties, the opposition party, MDB, began breaking up into several smaller ones.

PSDB was founded by politicians and intellectuals inspired by Europe’s social democracies and political systems. Parliamentarism, for instance, is one of the party’s historical causes. The Workers’ Party had more grassroots origins, with union leaders, Marxist intellectuals and Marxist-inspired Catholic priests among its main founders. They drew their inspiration from the USSR and Cuba, and were very close to the social movements.

Lula (PT) and FHC (PSDB) campaigning together in 1981, by Clóvis Cranchi Sobrinho

Some people have celebrated the reelection of Dilma Rousseff as a victory of the left over the right. In my opinion that view is wrong for several reasons. First, because I disagree that PSDB, and Aécio Neves in particular, is right-wing, both in terms of economics and of social and moral issues. Second, because I believe Dilma’s first government took a quite severe turn to the right on several topics that matter a lot to me. Since comparison with PSDB’s government during the 90s has been one of the main strategies of this year’s campaign, I’ll argue why I think it was actually a pretty good government, with a lot of left in it.

Unlike what happens in most other places, Brazil does not really have an actual right-wing party, economics-wise. Although we might see the birth of a couple in the near future, no current party is really against public health, education and social security being provided by the state as rights, or wants to decrease state size and lower taxes significantly. It should come as no surprise that even though it has undergone a lot of liberal reforms over the last 20 years, Brazil is still a very closed country, with very high import tariffs and a huge presence of the state in the economy. There is a certain consensus about all of that, with disagreements being essentially on implementation details, not goals.

On the other hand, and contrary to popular belief, when it comes to social and moral issues we are a very conservative people. Ironically, the two parties which have been in power over the last 20 years are quite progressive, being historical proponents of diversity, minority rights and reproductive rights. They have had to compromise on those causes to become viable alternatives, given the conservative nature of the majority of the voters.

Despite their different origins and beliefs, both parties share socialist inclinations and were allies from the onset. That changed in 1992, when president Collor, who had been elected in a runoff against Lula (whom PSDB supported), was impeached by Congress for corruption. With no formal political support and a chaotic situation on his hands, Itamar Franco, the vice president, called for a “national union” government to see out the last two years of the term. PSDB answered the call, but the Workers’ Party decided against being part of the government.

Fernando Henrique Cardoso, a sociologist who was one of the leaders of PSDB, was chosen to lead the Foreign Relations Ministry, but a few months later was nominated to the Economy Ministry. At the time Brazil lived under hyperinflation of close to 1000% a year, several stabilization plans had been attempted, and Economy Ministers did not last long in office. FHC gathered a team of economists and sponsored their stabilization plan, which turned out to be highly successful: the Plano Real (“Real Plan”). In addition to introducing a new currency – something that had become pretty routine for Brazilians by then – it also attacked the structural causes of inflation.

Lula was counting on the failure of the Plano Real when he ran against FHC in 1994, but the plan succeeded, giving FHC two terms as president. During those two terms FHC introduced several institutional changes that made Brazil a saner country. In addition to the hyperinflation, Brazil had lived through a debt crisis for decades and was still in default. FHC’s team renegotiated the debts and reopened lines of credit, but most importantly introduced reforms that made Brazilian finances and the Brazilian financial system credible.

The problem was not even that Brazil had a fiscal deficit; it simply had no control whatsoever over money supply and budget. Banks, whether private or public, had very little regulation and took advantage of the hyperinflation to hide monstrous holes in their balance sheets. When inflation was gone and regulation became stricter, those holes became apparent, and it was pretty clear the system would collapse if nothing was done.

Some people like to say that FHC was a president who ruled for the rich and didn’t care about the poor. I think the way the potential collapse of the banking system was handled is a great counter-example. The government passed laws making the owners of the banks responsible for their financial problems, regardless of whether those were caused by mismanagement or fraud. If a bank went under, the central bank intervened and added enough money to protect the deposits, but that money was a loan that had to be repaid by the owners of the bank, with the owners’ properties added as collateral. As a Brazilian journalist once said, the people did not risk losing their deposits; the bankers did risk losing their banks, though. Today we have a separate fund, filled with money from the banks, that does what the central bank did back then when required.

Compare that to countries where the banking system was saved with tax payer money and executives kept getting huge bonuses regardless, while owners kept their profits. It is hard to find an initiative that is more focused on the public interest against the interest of the rich people who caused the problem. This legislation, called PROER, is still in place today, and it came along with solid regulation of the banking system. It should come as no surprise that Brazil went through the financial crisis of 2008 with not a single hiccup of the banking system and no fear of bank runs. Despite having been against PROER back in the day, Lula celebrated its existence in 2008, when it was clear it was one of the reasons we would not suffer much. He even advertised it as something that should be adopted by the US and Europe.

It is also pretty common to hear that under FHC social questions were not a priority. I believe it is pretty simple to see that this was not the case, both by inspecting the growth of social spending and by the improvement of social indicators for the period, such as the UN’s Human Development Index. One area in which people are particularly critical of the FHC government is investment in higher education, and they are actually quite right. Brazil has free federal universities, and those did not get a lot of priority in the 90s. However, I would argue that while it is a matter of priorities, the choice was not education versus something else, but rather what to invest in inside education. The reality is that basic education was the priority.

When FHC came to power, a significant number of Brazilian children were not going to school at all. The goal was to make access to school universal for young children, and that goal was reached. Every child has been going to school since the early 2000s, and that is a significant achievement which reaches the poorest. While the federal universities are attended essentially by the Brazilian elite, given the difficulty of the entrance exams and the relative lack of quality of free public schools compared to private ones – still a reality to this day – investing in getting children to even go to school in the early years has a significant impact on the lives of the poorest.

It is important to remember that getting every child to go to school is also what gave birth to one of the most celebrated programs of the Lula era: Bolsa Família (“Family Allowance”), a direct money transfer to poor families, particularly those with children, which has been an important contribution to lowering inequality and lifting people out of extreme poverty. To get the money, families need to ensure their children are 1) attending school and 2) getting vaccinated.

That program comes from the FHC government, in which it was created under the name Bolsa Escola (“School Allowance”), in turn inspired by a program of the same name by governor Cristovam Buarque, from PT. What Lula did, and he deserves a lot of credit for this, was to merge a series of smaller programs into Bolsa Escola, and then expand the program so that it reached more and more people. Interestingly, when announcing the program he credited the idea to a state governor from PSDB. You can see why I think these two parties should be allies again.

When faced with all these arguments, people will eventually say that FHC was bad because he privatized companies and used orthodox economic policies. Well, if that is what it takes, then we’ll have to take Lula down with him, because his first term was essentially a continuation of FHC’s second term: orthodox economic policies to keep inflation down, along with privatization of several state-owned companies and banks. But Lula, whom I voted for and whose government I believe was a good one, is not my subject: Dilma is.

During Lula’s second term, Dilma gained a lot of power when other major leaders of PT went down for corruption. She became second in command and started leading several programs. A big believer in developmentalism, she started pushing for a bigger role for the state in coordinating the productive sector, with a clear focus on growing the industrial base.

One of the initiatives she sponsored was a sizable increase in the number and size of the subsidized loans given out by the national development bank (BNDES). Brazil started an unofficial “national champions” program, in which the government picked a few big companies to receive huge amounts of subsidized credit.

The goal was for these selected firms to get big enough to be competitive in the global market. The criteria for the choices are completely opaque, if they even exist, and included handing out millions in subsidized credit to Eike Batista, who became Brazil’s richest entrepreneur for a while, and lost pretty much everything when it became clear the oil would not be pumping out of his fields after all, sinking with him a huge amount of public funds invested by BNDES.

The way this policy was enacted, it is unclear how much it really costs in terms of public funds: the Brazilian treasury issues debt to raise the money, lends it to BNDES at higher-than-market interest, and BNDES then lends it out to the big companies at a lower-than-market interest rate. Although this is obviously unsustainable, the problem does not yet show on the balance sheet because the grace period on BNDES’s debt with the treasury runs to 2040. That this has a cost and, perhaps more importantly, a huge opportunity cost is not clear because it is not part of the government budget. Why are we putting money into this rather than quadrupling Bolsa Família, which studies show generates 1.78 reais in GDP for every 1 real invested? Worse, why are we not even updating Bolsa Família enough to cover inflation?

When Dilma got elected in 2010, the first signs were pretty bad. She was already seen as someone who did not care much for the environment, and in her first month in power she made good on that promise by pushing to get construction of the Belo Monte Dam started as soon as possible, regardless of whether the attached conditions were satisfied. To this day there are several issues with how the building of the dam is going: the handling of the indigenous people and of the small city nearby is lacking, and the conditions are still not met.

Beyond Belo Monte, indigenous leaders are being assassinated, and deforestation in the Amazon forest increased by 122% in 2014 alone. Dilma’s answer to people who question her on these kinds of issues is essentially: “would you rather not have electric power?”

Her populist, authoritarian nature and obsession with industry are also pretty evident in her policies for the energy sector as a whole. She showed up on national TV on the eve of our independence day celebration to announce a reduction in electricity tariffs, mainly for industry but also for homes. Nobody really knew how. The following week she sent a fast-track bill to Congress to automatically renew the concessions of power grid operators, requiring those who accepted to lower their tariffs, instead of holding an auction – which was already necessary anyway, because the concessions were up in 2015. There was no discussion with stakeholders; there was just a populist announcement and a great deal of rhetoric painting anyone who opposed it as being against the people.

And now everything has gone into the crapper, because that represented a breach of contract that required indemnification, and we had a pretty bad drought that made power more expensive, given the need to turn on the thermal generators. Combining the costs of the thermal generation, the indemnities, and the financial fallout the grid operators suffered, we are already at 105 billion reais and counting; nobody knows how high the bill will get. Any reduction in tariffs has long since been wiped out. And the fact that industry has lowered production significantly ends up being good news: we would probably be under rationing already otherwise.

You would expect someone who fought a dictatorship to be pretty good on human and civil rights. What we see in reality is a lack of respect for those things. During the World Cup, Dilma put the army on the streets and supported arbitrary behaviour by state police forces throughout the country. They jailed a bunch of demonstrators preemptively. No shit. The would-be demonstrators were kept in jail throughout the tournament under false accusations. Dilma’s Minister of Justice said several times that the case against them was solid and that the arrests were legal, but it turned out the case simply did not exist. Just this week we had a number of executions orchestrated by policemen in the state of Pará, and there has been zero reaction from the federal government.

In the oil industry, Dilma enacted a policy of subsidizing gas prices by using a fixed price that used to be lower than international prices (no longer the case, with the fall in international prices). That would not be a problem if Brazil were self-sufficient in oil and gas, which we are not: we have had to import significant amounts of both. The implicit subsidy cost Petrobrás a huge amount of cash – the more gas it sold, the bigger the losses. This led not only to a decrease in the company’s market value (it is a state-controlled but publicly traded company), but to a reduction in its investment capacity as well.

That is more problematic than it sounds because, under our current concession model, every single oil field needs to have Petrobrás as a member of the consortium. Limiting the company’s investment capacity limits the rate at which our pre-salt oil fields can be explored, and thus the speed at which we can become self-sufficient. Chicken and egg, anyone?

To make things worse, Dilma made the policies that lowered taxes on car production, originally used to foster economic activity during the 2008-2010 crisis, essentially permanent. This led to a significant increase in traffic and pollution in Brazilian cities, while at the same time increasing the pressure on Petrobrás, which had to import more and more gas. Meanwhile, Brazilian cities suffer from a severe lack of mobility infrastructure. A recent study showed that Brazil has spent almost twice as much subsidized money on pro-car policies as on mass transit projects. Talk about good use of public funds.

One of the only remaining pieces of good news the government could still point to was the steady reduction in extreme poverty. Dilma was actually elected promising to eradicate extreme poverty, and changed the government’s slogan to “A rich country is a country with no poverty” (País rico é país sem pobreza). Well, it turns out all of these policies caused both inequality and extreme poverty to stop falling as of 2013. And given that the policies were actually deepened in 2014, I believe it is very likely we’ll see an increase in both when we get the data for 2014, next year.

Other than that, her policies ended up being a complete failure. Despite tax benefits being handed to several sectors, investment has fallen, growth has fallen, and inflation is quite high, at 6.6% over the last 12 months. In terms of minorities, her government has been a severe setback, with the government backing out of educational material against homophobia, saying it would not do “advertisement of sexual choice”, and revoking a decree that allowed the public health system to perform abortions in the cases allowed by law (essentially when the woman has been raped).

Looking at Dilma’s policies, I really can’t see that much of the left, honestly. So why, you might ask, has this victory been deemed a victory of the left over the right? My explanation is the aura the workers party still manages to keep over itself. There’s a notion that whatever PT does, it will still be more to the left than PSDB, which I think is just crazy.

There is also a fair amount of idealizing of Dilma just because she is Lula’s protégée. People will forgive anything, provided it is the Workers’ Party doing it. Thankfully, the number of people aligned with the left who supported the PSDB candidate this election tells me this is changing quite rapidly. Hopefully that leads to PT having to reinvent itself, and get in touch with the left again.

QtWebKit is no more, what now?

Driven by the technical choices of some of our early clients, QtWebKit was one of the first web engines Collabora worked on, building the initial support for NPAPI plugins and more. Since then we had kept in touch with the project from time to time when helping clients with specific issues, hardware or software integration, and particularly GStreamer-related work.

With Google forking Blink off WebKit, a decision had to be made by all vendors of browsers and platform APIs based on WebKit on whether to stay or follow Google instead. After quite a bit of consideration and prototyping, the Qt team decided to take the second option and build the QtWebEngine library to replace QtWebKit.

The main advantage of WebKit over Blink for engine vendors is the ability to implement custom platform support. That meant QtWebKit was able to use Qt graphics and networking APIs and other Qt technologies for all of the platform-integration needs. It also enjoyed the great flexibility of using GStreamer to implement HTML5 media. GStreamer brings hardware-acceleration capabilities, support for several media formats and the ability to expand that support without having to change the engine itself.

People who are using QtWebKit because it is GStreamer-powered will probably be better served by switching to one of the remaining GStreamer-based ports, such as WebKitGTK+. Those who don’t care about the underlying technologies but really need or want to use Qt APIs will be better served by porting to the new QtWebEngine.

It’s important to note though that QtWebEngine drops support for Android and iOS as well as several features that allowed tight integration with the Qt platform, such as DOM manipulation through the QWebElement APIs, making QObject instances available to web applications, and the ability to set the QNetworkAccessManager used for downloading resources, which allowed for fine-grained control of the requests and sharing of cookies and cache.

It might also make sense to go Chromium/Blink, either by using the Chrome Content API or by switching to one of its siblings (QtWebEngine included), if the goal is to build a browser that needs no integration with existing toolkits or environments. You will be limited to the formats supported by Chrome and to the hardware platforms targeted by Google, though. Blink does not allow multiple implementations of the platform support layer, so you are stuck with what upstream decides to ship, or with a fork to maintain.

It is a good alternative when Android itself is the main target, as it is the technology used to build Android’s main browser. The main advantage here is that you get to follow Chrome’s fast-paced development and its great out-of-the-box support for the targeted hardware. If you need to support custom hardware or to be flexible about the kinds of media you would like to support, then WebKit still makes more sense in the long run, since that support can be maintained upstream.

At Collabora we’ve dealt with several WebKit ports over the years, and still actively maintain the custom WebKit Clutter port out of tree for clients. We have also done quite a bit of work on Chromium-powered projects. Some of the decisions you have to make are not easy and we believe we can help. Not sure what to do next? If you have that on your plate, get in touch!

Understanding systemd vs upstart

Originally published on PoliGNU.

Why change?

Recently the Debian project went through an intense debate to decide which init system should replace the venerable (but antiquated) System V init as the system default for the next release, codename jessie.
The reason for changing may not seem obvious, especially to those very attached to the idea of “UNIX” or simply used to how things work today, but it becomes clear when you look at the main problems faced by systems that still use System V init. For starters, init knows nothing about many of the technologies that have appeared over the last few decades.
Take hardware handling as an example: a daemon that needs to be started when a certain kind of hardware is plugged in will not be executed automatically by init when that happens, so in addition to the logic for handling the case at boot, you also need other mechanisms to react to hot-plugging.
Being a collection of shell scripts also causes countless problems. Since every script has to reinvent the initialization of the daemon it controls, it is quite common for them not to be 100% resilient to problems such as leaving (main or child) processes behind when the administrator asks for the service to be stopped. That ends up requiring manual intervention to kill the leftover processes.
Performance is also a problem that went a long time without a solution. With the arrival of multi-CPU machines and network waits becoming an issue, a way of running the boot scripts in parallel was invented, bringing with it the enormous complexity of having to declare dependencies between scripts, to guarantee that only scripts that can run in parallel do so. But this, too, is much harder than it looks – dependencies in modern systems are far more subtle and variable than a first analysis may suggest. On top of that, executing each of those scripts adds non-negligible overhead.
Finally, diagnosing service startup failures is absurdly complex; you rarely know which log file will contain the relevant messages, and quite often there will not even be a log of the actual failure anywhere, leaving the administrator to run the service manually and see what happens.
The new init systems came about to solve some or all of these problems. Actually, Mac OS X got there first with launchd, but that is a story for another time; let us start with upstart, which was born at Ubuntu a few years ago.

Upstart

Upstart appeared when people’s main concern was improving boot performance through parallelism. The idea is to replace the need to declare dependencies between services and instead establish the conditions for a service to be “ready” to run.
The dependency system invented for System V starts from the goal: I need to run the procedure that mounts remote file systems, and for that I need to run its dependencies, which are mounting the local file systems and bringing up the network. Upstart inverts that logic and starts from the procedures that have no dependencies at all, which in turn makes other procedures “ready”, and runs them from there.
The idea is that the init system reacts to events, which may be a service having been established but may also be of another nature: a piece of hardware being plugged in, for instance. That solves the performance problem, to a certain extent, and creates a framework that allows the init system to keep taking care of the system while it is up, starting and stopping services as conditions change.
There are some caveats to this strategy, which the curious will easily find in the frequent debates between Lennart Poettering and the upstart proponents, but I would like to call attention to one in particular: with events, there is no guarantee that a service is actually fully up when the startup procedure upstart knows about finishes – make a mental note of that.
Finally, upstart ends up solving neither the diagnostics problem nor that of stray processes being left behind (although a solution adopting systemd’s idea for the latter was being worked on), mainly because the init recipes upstart uses are still custom shell scripts.

Systemd

systemd is the creation of Lennart Poettering, a Red Hat employee and Fedora contributor. Because of that, many people attribute systemd to Red Hat, but they are mistaken: although Red Hat today has several projects with systemd at their core, the original idea and effort were Lennart’s, done on his own time, not a company project.
The motivation behind its creation is far more ambitious than the one that gave rise to upstart: the idea is that the Linux kernel has countless amazing, extremely advanced features that end up unused outside very specific environments, because there is no common userspace infrastructure to take advantage of the functionality and make it available to the rest of the system.
One example: cgroups. Control Groups are a way of bundling processes into a package that allows the group to be treated as a whole. These groups can be used, for instance, to limit which parts of the system the member processes can see, creating a more restricted virtual view of the file system, or limiting which network interfaces the processes can see. They can also be used to make CPU scheduling better suited to the system, or to impose limits on memory usage, I/O operations and so on.
One of the first features systemd brought was precisely the use of Control Groups. Every service systemd starts runs inside its own cgroup, which means, for example, that all processes created by that service can be – with certainty – terminated when the service is stopped. It also means that even if your Apache is eating lots of CPU or memory, your SSH server still gets a big enough slice of CPU time to accept the connection you will use to deal with the problem, since Linux tries to be fair to the different cgroups.
On top of all that, the use of cgroups gives systemd the ability to guarantee that no processes are left behind when a service is stopped: since processes created by processes placed in a control group are themselves placed in that control group, terminating every process in the control group leaves none behind. The upstart folks considered adopting this idea.
But systemd also attacked the performance problem, in two ways. The first was the idea of removing shell scripts from the picture entirely. Each service is specified in a configuration file called a “unit”, which describes all the information init needs to start and take care of the process. That removes from the equation the need to start a shell interpreter and run the complex scripts currently used with System V.
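For illustration, a minimal sketch of what a unit file for a hypothetical daemon might look like – the service name and binary path are made up:

    [Unit]
    Description=My example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon --no-daemonize
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target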
The second was a somewhat more unusual solution to the dependency question. The idea is simple: most services open sockets of some kind to receive requests from their clients. X, for instance, creates a UNIX socket as a file in /tmp. What systemd does is create all the sockets of the services it controls (they are described in the unit files) and, when a connection comes in, start the service and hand it the socket.
The interesting thing about this solution is that it 1) establishes the dependency relationship in real life: the service is started when someone asks for it, and 2) guarantees that clients that get started will have their first request served; there is no longer the problem of not knowing when a service is fully up (remember the mental note from the upstart case?).
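On the service side, the cooperation needed is small. A minimal sketch of a socket-activated service using the sd-daemon API (error handling trimmed; the greeting is just for illustration):

    /* systemd creates the listening socket before starting us and passes
     * it as fd SD_LISTEN_FDS_START; we just accept connections on it. */
    #include <systemd/sd-daemon.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int
    main (void)
    {
        if (sd_listen_fds (0) != 1)
            return 1;

        int listen_fd = SD_LISTEN_FDS_START;
        for (;;) {
            int client = accept (listen_fd, NULL, NULL);
            if (client < 0)
                continue;
            const char msg[] = "hello from a socket-activated service\n";
            write (client, msg, sizeof msg - 1);
            close (client);
        }
    }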
To top it off, systemd captures the standard output and standard error of the services it starts and can easily show the last few log lines of a service when it fails, making diagnosis very simple:
kov@melancia ~> systemctl status mariadb.service
mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled)
Active: active (running) since Seg 2014-02-03 23:00:59 BRST; 1 weeks 3 days ago
Process: 6461 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
Process: 6431 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
Main PID: 6460 (mysqld_safe)
CGroup: /system.slice/mariadb.service
        ├─6460 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
        └─6650 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plug...
Fev 03 23:00:57 melancia mysqld_safe[6460]: 140203 23:00:57 mysqld_safe Logging to /var/log/mariadb/mariadb.log.
Fev 03 23:00:57 melancia mysqld_safe[6460]: 140203 23:00:57 mysqld_safe Starting mysqld daemon with databas...mysql
Fev 03 23:00:59 melancia systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.

The licensing question

Of utmost importance when we talk about Free Software are the questions of license and code copyright. systemd is licensed under the LGPL version 2.1 or later, which allows proprietary programs to be linked against its libraries. The copyright of contributions to the code is retained by the contributors. A fairly ordinary free software project from that point of view.
Upstart, on the other hand, being a Canonical creation, follows what has been Canonical’s policy for some time now: it uses a GPL license (version 2) and requires contributors to sign a Contributor License Agreement sharing their copyright with Canonical. With that, Canonical becomes the sole owner of the copyright of the whole and can, for instance, relicense it or release a proprietary version if it decides to.
At first glance this does not look like a problem: indeed, there are several more permissive licenses used by other software, such as the BSD licenses, which allow people to close the code. The big issue here is that under this model it is not just anyone who can do that; only Canonical can. That ends up artificially unleveling the playing field, creating a monopoly on the privilege of releasing proprietary versions.

The future

Currently two important systems use upstart: Ubuntu and Red Hat Enterprise Linux 6, Red Hat’s current supported version. systemd was first adopted by Fedora, which means the future RHEL 7 will also be using systemd. A few other distributions have started using systemd, either by default or as a well-supported option, such as SuSE and Arch Linux.
With Debian adopting it as the default, all the major base distributions will be built on systemd. To the surprise of many (myself included), Mark Shuttleworth announced that, given the Debian project’s decision, Ubuntu would also adopt systemd by default, graciously accepting defeat, in his own words.
With this change it is very unlikely upstart will have a future after its retirement from the stable releases of RHEL and Ubuntu LTS, unless some other distribution steps up to take over its maintenance.
Long live systemd!


Note: when discussing the CLA required by Canonical, this text previously used the expression “transferring the copyright”, which suggested that the original author of the contribution lost their rights; that is not true, and the text has been corrected.

WebKitGTK+ hackfest 5.0 (2013)!

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation sponsored my trip, allowing me to fly on the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and to have pulpo a feira for dinner, which I simply love! That in addition to enjoying the company of so many great hackers.

Web with wider tabs and the new prefs dialog

The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider into a separate binary, which allowed us to remove some hacks from the browser’s session management code. It also makes testing changes to Web more convenient again. Jon McCann has been pounding at Web’s UI, making it sleeker, with tabs that expand to make better use of the available horizontal space in the tab bar and new dialogs for preferences, cookies and password handling. I have made my tiny contribution by making it not keep around tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.

Martin Robinson and Dan Winship hack

Martin Robinson and myself have, as usual, dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don’t) or because we hate autotools (we do), but because it should make people’s lives easier when adding new files to the build, and should also make our build less hacky and quicker – it is sad to see how slow our build can be when compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there’s plenty to do.

Hackers take advantage of the icecream network for faster builds

Under the hood, Dan Winship has been pushing HTTP/2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup’s MIME type sniffing support to match the newest living specification, which includes specifications for several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation – this will still be a topic of discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.

Jon and Guillaume battling code

Other than that, I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two: one part providing the functionality and the other the API that will allow browsers to customize how that is done. There’s a lot of work to be done here; I plan to land at least that first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, a war it looks like we cannot win.

Hackers chillin’ at A Coruña

I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME Foundation for making it possible for me to attend! See you in 2014!


II Encontro Nacional de Dados Abertos

Last week ended up short: I skipped a bunch of things I wanted to do and picked up the pace on others so I could spend the last two days of the week in Brasília at the II Encontro Nacional de Dados Abertos (2nd National Open Data Meeting), organized by the SLTI of the Ministry of Planning and by the Brazilian W3C office. It was worth it!

Back in 2010 I started scratching a personal itch with a project to collect and give better visibility to what Minas Gerais state legislators spend through the so-called verbas indenizatórias (reimbursement allowances). The result was the software code-named “Montanha”, which was made of two parts: 1) a web scraper that visited each of the thousands of pages holding the data at fairly fine granularity and loaded them into a database, and 2) a web application that displayed the data in tables and charts to make them easier to read.
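
A minimal sketch of what the scraper half of such a project looks like – the URL, page structure and database schema below are made up for illustration, not the actual Montanha code:

    # Minimal scraper sketch; the URL, page structure and schema are made up.
    import sqlite3

    import requests
    from bs4 import BeautifulSoup

    db = sqlite3.connect("expenses.db")
    db.execute("CREATE TABLE IF NOT EXISTS expense "
               "(legislator TEXT, nature TEXT, value REAL)")

    for page in range(1, 1001):  # thousands of fine-grained pages
        html = requests.get(f"https://legislature.example/expenses?page={page}").text
        soup = BeautifulSoup(html, "html.parser")
        for row in soup.select("table.expenses tr")[1:]:  # skip the header row
            legislator, nature, value = (td.get_text(strip=True)
                                         for td in row.find_all("td"))
            # Brazilian pages use a decimal comma; assume simple "123,45" values.
            db.execute("INSERT INTO expense VALUES (?, ?, ?)",
                       (legislator, nature, float(value.replace(",", "."))))

    db.commit()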

Later on I decided to rename the project to ‘Olho Neles!’ (‘Keep an eye on them!’) to make it friendlier, and registered a domain* (http://olhoneles.org/). Because of this work I was invited to speak at the first ENDA, which took place in 2011, but unfortunately the trip to Brasília fell through and I ended up being represented by Marcelo “metal” Vieira, a contributor to the project, who was luckily already there. I was invited again this time around, and it worked out!

Me presenting Olho Neles! (photo by Raquel Camargo)

The opening was very good and featured the president of ESAF – the Escola de Administração Fazendária (School of Finance Administration), where the event took place – who gave an opening speech with a high degree of governmental self-criticism. He started by saying that a good government is one that takes a beating morning, afternoon and night – which he believes is essential to keep the public administration constantly concerned with improving, and something open data has a lot to contribute to. He then stated that the Receita Federal (the federal revenue service) is an institution of excellence, very good at collecting taxes, but that the quality of the spending of the resources it collects is sorely lacking; “we cannot always pass the bill for our inefficiencies on to society”, he said, taking a pointed jab at his own government.

Then we moved on to the event proper. I confess I initially suspected it would be a series of talks about projects to be launched at some point in the future, but I was positively surprised: many people were there to show initiatives already up and running, with data already published, even following the principle of not chasing the perfect, which is the enemy of the good. More than one presentation had a slide with a variation of “publish early, publish often”, clearly inspired by the “release early, release often” of The Cathedral and the Bazaar, one of the cornerstones of hacker thinking. Some cited the first Encontro Nacional de Dados Abertos as the starting point of their initiatives.

Another thing I felt is that most initiatives were not strongly institutional – some kind of planning and strategic definition was always mentioned, it’s true, and compliance with the Lei de Acesso à Informação (Brazil’s freedom of information law) certainly played an important role, but the initiatives that went beyond mere compliance gave me the clear impression of having come out of the technology departments, from people eager to make things happen. It never hurts to be reminded that the public service has plenty of these people who love making things happen; I’m happy to have met quite a few of them over the years =).

And meeting people was, as always, the high point of the event for me. Although I didn’t do everything I wanted – I ended up not helping make the idea of a cryptoparty during the event happen, didn’t get to watch BR080 after all, and missed some debates I considered important – I was rarely away from interesting people with good ideas and conversation that added a lot for me. I got to spend more time with the folks from the ônibus hacker (hacker bus) and from the Transparência Hacker group, which was very interesting and fruitful for me.

My presentation (slides in PDF, slides in ODP) went without major problems, except that I presented without my glasses, which are missing one of their arms – I was afraid nobody would manage to pay attention to what I was saying, distracted by my crooked glasses. I was a bit more nervous than usual from not being able to sense what people were thinking =P.

The reception was good; I was happy to learn that the representative from ALMG (the Minas Gerais state legislature) liked seeing a project use their work of exposing data through web services (which greatly simplified the ALMG collector code in Olho Neles!). I was also happy to learn that the Câmara Municipal de SP (São Paulo City Council) already has web services for the verbas indenizatórias – there’s an easy task for anyone wanting to contribute to Olho Neles! ;)
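
For contrast with the scraping approach above, a collector built on top of a JSON web service collapses into very little code – roughly something like this, with an invented endpoint and invented field names:

    # Sketch of a collector backed by a JSON web service; the endpoint and
    # field names are invented – the point is how little parsing is left.
    import requests

    data = requests.get("https://dadosabertos.example.gov.br/expenses.json").json()
    for item in data["expenses"]:
        print(item["legislator"], item["nature"], item["value"])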

During the presentation by the Senate’s representative – who, incidentally, seems to have a well-established relationship with the Transparência Hacker group – I learned a bit more about LexML and about the tools that have been used to help automate the legislative process and make it more transparent, such as LexEdit. Jean Ferri, who moderated the session, gave me some details on how the whole thing works, and I was quite impressed – I didn’t think things were that far along.

All in all, I was very happy to have been invited and to have participated. Many thanks to the SLTI of the Ministry of Planning and to the Brazilian W3C office for the initiative – keep ‘em coming!

* .org without .br because our domain registration policies are blergh and require you to be a formally registered organization in order to register an org.br.