Wednesday, August 22. 2012
On August 15, the first meeting of the Symfony User Group Stuttgart (SFUGSTR) took place: more than twenty Symfony enthusiasts from Stuttgart and the surrounding area gathered at the "Häusle" in a backyard of Christophstraße. 100 days provided the venue and sponsored the whole event with tasty baguettes and cold drinks.
After a short introduction by Gaylord Aulke, managing director of 100 days GmbH, Christoph Hautzinger gave a one-hour talk, "Symfony2 Admin Bundles", with an overview of the current options for building admin interfaces on top of database entities in Symfony2 via configuration. The slides are available on Slideshare:
A short round of introductions followed, meant to reveal what kind of people attend such a user group meeting and what their interests and expectations are. The spectrum was broad: Symfony2 developers who work with it every day; Drupal developers gearing up for the future; developers still working with symfony1 who are "slowly" tackling the migration to Symfony2; and simply people who find Symfony cool and are trying to establish the flagship framework in their companies. Almost everyone hoped to exchange experiences.
The user group meeting was very well received. The plan is to hold the event every two months in a relaxed atmosphere. The next meeting was scheduled for October 18. More information is available on MeetUp.
Wednesday, April 11. 2012
Recently we implemented some quite sophisticated caching mechanisms using varnish. In addition to just caching, backend errors and invalidated pages had to be handled gracefully whenever possible so we implemented grace mode, saint mode and backend probes.
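The combination of the three mechanisms can be sketched in VCL roughly like this (Varnish 3-era syntax; the backend address, probe URL and all timings are illustrative assumptions, not our production config):

```vcl
backend app1 {
    .host = "10.0.0.10";
    .port = "8080";
    .probe = {               # health check: backend counts as sick
        .url = "/health";    # once 3 of the last 5 probes fail
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_recv {
    # grace mode: accept stale objects up to 1h old while the
    # backend is unhealthy or a fresh copy is being fetched
    set req.grace = 1h;
}

sub vcl_fetch {
    # keep objects around long enough to be served in grace
    set beresp.grace = 1h;

    # saint mode: on a server error, blacklist this URL for this
    # backend for 20s and retry the request on another backend
    if (beresp.status >= 500) {
        set beresp.saintmode = 20s;
        return (restart);
    }
}
```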
Everything was working great during development, but then we did some real performance testing... The results gave us quite a headache, because from time to time we noticed the following in varnishlog:
11 FetchError c no backend connection
This error appears when (obviously) no backend is available/healthy. The problem: the health checks at that time reported that the backends were alive. Googling didn't help, as this error didn't seem to be one of the standard issues. So I finally found myself digging through the C code, and after a while I figured out what was going on.
The reason we got this error was a combination of a not-yet-finished backend and the (poorly documented) saint mode:
We were still getting quite a lot of 500s from the backends because of database inconsistencies and general errors. Saint mode maintains a blacklist of URLs per backend. When searching for a backend to handle a request, Varnish first checks this list to see whether the URL is blacklisted for specific backends, and will only ask a backend for which the URL is not blacklisted. This much is documented.
The undocumented bit: in order to keep the blacklist compact, saint mode silently (!) blacklists a complete backend server once a certain number of blacklisted URLs for that backend is exceeded.
If you get frequent 500s from different URLs of all your backends, all will be marked unhealthy over time, resulting in a "no backend connection" error for subsequent requests.
The quick fix was to raise the URL blacklist size (saintmode_threshold). Since this results in longer URL blacklists, and thus more memory consumption and longer lookup times, it is not a sustainable solution for production systems. The real solution was to fix all the errors in the backend.
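For reference, the knob in question is a varnishd runtime parameter; a sketch with an assumed value (not our production setting):

```
# default saintmode_threshold is 10: once more than 10 URLs are
# blacklisted for one backend, Varnish silently treats the whole
# backend as sick
varnishd -a :80 -f /etc/varnish/default.vcl -p saintmode_threshold=50

# or adjusted at runtime via the management interface:
# varnishadm param.set saintmode_threshold 50
```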
Friday, March 16. 2012
After a lot of hard work, a complete rebuild of wetter.com went online on March 10. The site has been #1 or #2 in Germany for weather forecasts for years, with millions of visits per day. It was rebuilt from scratch on PHP 5.4, Symfony2, MySQL, Solr, Varnish and Nginx. The task of rebuilding the whole site was given to our partner TFT (Tomorrow Focus Technologies), who invited 100 DAYS into the project to develop the Symfony2 parts. This post describes our experiences developing and launching this large-scale application.
Continue reading "wetter.com - Relaunch with symfony2, Assetic, Varnish and Twig"
Friday, February 17. 2012
In 2007, the first Plat_Forms contest took place with support from Zend Technologies, FU Berlin, Heise publishing and OSBF. It was a web development platform comparison like nothing done before: 9 teams in a controlled environment working on the same task in a limited time. During that time, the team of Prof. Lutz Prechelt collected data, and after the contest the results, together with data on the workflow of the individual teams, were evaluated scientifically.
Back then, the PHP teams had outperformed Java and Perl in terms of development productivity and usability of the resulting applications. The results of this contest helped position PHP in many large organisations because they proved common prejudices against PHP wrong: no, PHP is not insecure; no, it is not slow; etc.
In 2011, a second contest was held with 16 teams. Unfortunately, the results were not that inspiring this time (as shown here some time ago). Now the third event of this kind is coming up:
Plat_Forms 2012 will take place on April 3rd and 4th, 2012 in Berlin, Germany.
Plat_Forms 2012 will focus on scalability and cloud computing. Unlike in 2007 and 2011, this year's teams will implement a highly scalable web service on Amazon Web Service infrastructure.
Again, multiple teams of 3 people each are invited for each web programming platform: PHP, Ruby, Perl, Java. The organizers are looking for strong PHP teams with the will to compete.
PHP needs you! More information here.
Tuesday, January 31. 2012
Heise and FU Berlin just announced the next iteration of the Plat_Forms programming contest. This time, the task all teams need to implement will be more "cloudy" and less frontend-heavy. Hopefully PHP will perform better than in 2011. If you have a team of 3 PHP developers and think you can compete with Ruby and Java: registration is open!
Thursday, January 5. 2012
In January 2011, the second iteration of the Plat_Forms contest was conducted by FU Berlin, Heise and OSBF. 16 teams of 3 developers each were given the same task: implement as much as possible of the given requirements within a fixed time span in a controlled environment. Afterwards, the resulting code was evaluated by the team around Prof. Lutz Prechelt and Ulrich Stärk. The results were presented on November 25, 2011.
For the PHP side, I was asked to select the participants. To provide good coverage of current PHP best practices, I selected two Zend Framework teams, one Symfony team and one FLOW3 team (as an additional influence besides the general-purpose frameworks).
After PHP's great success in the previous contest in 2007, and given all the improvements in the PHP space over the last years, we expected PHP to be even more successful than before. But we were surprised:
So how can this be interpreted?
Java was almost constant, with slightly improved coverage. Perl decreased somewhat; that might be discussed elsewhere. Ruby did very well, but there are no numbers for 2007, so no trend can be derived here.
But what happened to PHP? Less coverage than in 2007, and much less consistency between the teams (one quite good, the other three rather bad). Did we select the wrong teams? Did they have a bad day?
Why did all the work the PHP community did during the last years in terms of software architecture, frameworks and quality not lead to better productivity?
OK, the results for robustness improved for PHP compared to 2007. But the size of the PHP applications was less consistent than in 2007: some were quite compact, others were as big as the Java applications while covering less functionality.
To me these results match my observations during the last years in PHP space. The idea of PHP being "super-productive" needs to be questioned at least. Maybe the PHP community is on a journey and the destination has not been reached yet. We will see with the results of the next iteration of this contest...
While the Ruby teams spent much more time writing automated tests than everybody else, they still achieved the highest functional coverage.
While in 2007 the PHP teams were the ones most interested in the customer's wishes, in 2011 the Ruby teams asked the most detailed questions.
Saturday, January 22. 2011
The organizers around Prof. Lutz Prechelt and Eduard Heilmayr did a fantastic job providing a great environment to work in. They will now evaluate the work of the different teams and present their scientific results in a couple of months.
Some impressions of the contest can be seen in the contest blog.
Friday, November 5. 2010
In 2007 there was a programming competition conducted by FU Berlin and Heise Verlag:
Different web application platforms (Java vs. PHP vs. Perl) were to be compared, and the results were analysed in many respects. I took part as a member of the Zend team and had a lot of fun.
Now Prof. Prechelt from FU Berlin is setting up another challenge. In January 2011, the contest will be repeated. With more platforms and even more fun this time. The official announcement can be seen here: http://www.plat-forms.org/platforms-announcement
We are searching for 3-4 teams from the PHP area to participate there.
If you feel that you and your 3-person team belong to the top level of PHP developers, and if you want to take the challenge and compete against Java, .NET, Ruby, Python and Perl teams, feel free to apply now!
See you there
Thursday, August 28. 2008
Memcached is a very cool piece of software. When I recently did some optimization work on a cluster-based web app, I was wondering how memcache was spreading my cache entries over the cluster, so I did some research on monitoring tools. A simple approach would be to use Cacti to monitor status values like cache usage, hits/sec etc. This can be done with a template like this. An alternative was provided by Harun Yayli: his memcache.php, which somewhat resembles the APC status page and is now part of PECL/memcache, is easy to set up and works well. It also lets you dive into the data structures, but you have to pick one server first, then click into a "slab", and only then do you see the keys. Inspired by this, I wrote a small script that fetches all data from a memcache cluster, extracts all the keys, then sorts and displays them in a list. Yes, it is ugly, and yes: memcached does not answer other requests while doing a cachedump. But I found it very useful. Maybe somebody can point me to something more advanced? That could save a lot of time...
But here is my naive approach: assuming you have a Memcache object in PHP 5 with some servers registered via the addServer method, the following can be used:
$list = array();
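The rest of the original listing did not survive here; a minimal reconstruction of the idea, assuming the legacy PECL/memcache API (getExtendedStats('slabs') and getExtendedStats('cachedump', ...)), looks roughly like this (it needs a live cluster, and $memcache is assumed to be set up as described above):

```php
<?php
// Sketch: dump all keys from every server in the cluster.
// Warning: cachedump briefly blocks the memcached server.
$list = array();

$allSlabs = $memcache->getExtendedStats('slabs');
foreach ($allSlabs as $server => $slabs) {
    foreach (array_keys($slabs) as $slabId) {
        if (!is_int($slabId)) {
            continue; // skip aggregate entries like 'active_slabs'
        }
        // per slab: fetch the stored items (third argument 0 = no limit)
        $dump = $memcache->getExtendedStats('cachedump', $slabId, 0);
        foreach ($dump as $entries) {
            if (is_array($entries)) {
                foreach (array_keys($entries) as $key) {
                    $list[] = $key;
                }
            }
        }
    }
}

sort($list);
```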
I then added a very simple HTML table output to list the result array, and found out some interesting details about my application that I don't want to share here.
It worked for me, but for bigger memcache clusters (mine has only 10 servers) or big content tables this might not return the complete content. More to come...
Saturday, June 21. 2008
Sorry guys, flamebait again. But I need to say this: I HATE NOTICE-FREE PROGRAMMING in PHP!
$email = (key_exists('email', $values)) ? $values['email'] : null;
Honestly: this does not make any sense at all. It just pollutes your code with technical constructs that don't contribute to the solution.
UPDATE: Just to make this clear: I am _not_ saying you should skip input filtering! Of course every input needs to be whitelist-filtered, and every output escaped to prevent XSS. What I mean is the internals of your script: you can count on PHP to do things for you, and it is exact, because PHP does it in exactly the same way every time the script is executed.
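For illustration of what that guard actually does, here is a made-up example (key_exists() is an alias of array_key_exists()):

```php
<?php
$values = array('email' => null);  // key present, value null

// key_exists/array_key_exists sees the key, even with a null value:
$hasKeyA = array_key_exists('email', $values);  // true

// isset() looks similar, but treats a stored null as "absent":
$hasKeyB = isset($values['email']);             // false

// The guarded read from the post above:
$email = array_key_exists('email', $values) ? $values['email'] : 'missing';
// $email is null here, not 'missing'
```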
Sunday, May 18. 2008
Everyone working with PHP probably gets asked quite often what the difference is between PHP and Java (or C#/.NET). Besides the usual aspects (scripting vs. compiled, in-process vs. separate process, multithreading etc.), I think there are some "soft facts" that might be even more important for commercial software development.
For example: in Java (or C#) you can code any architecture for any problem. Some will need more development time and more hardware, others less. In PHP you still have a lot of ways to solve a problem, but not as many as in other languages, because of the limits on object lifetime. Some of the possible solutions will lead to a dead end in terms of performance and maintainability. Therefore, intelligent PHP developers tend to communicate first. They search for proven solutions, maybe existing PHP extensions and working code, before defining an architecture (and I don't mean design patterns, but examples of working solutions). Maybe it even helps here that quite a few PHP developers don't have a solid computer science background and therefore need this kind of inspiration to find a solution at all. On the other hand, people tend to be proud when they get something working in PHP, and thus blog about it and open-source their architecture and code.
Of course you find all this in Java or C# as well, but much more so in the PHP area. It seems the "not invented here" syndrome is less mighty here, or people are more motivated to share.
Anyway, the result is: even though there are not many formal standards in the PHP world, successful developers have a common understanding of the do's and don'ts in PHP. This unwritten standardization leads to a very interesting fact (also shown in the Plat_Forms contest): PHP solutions from different teams (of comparable skill level) are much more consistent than Java ones (or probably C# ones (Prof. Prechelt, please forgive me this unscientific deduction)). Meaning: PHP appears more predictable than other programming languages. Now that is a fact the business decision makers might be interested in...
Tuesday, April 15. 2008
In typical PHP projects, people are afraid of spending more hours on development than actually planned. As the main reason for exceeding the estimated (thus paid) budget of hours, usually "technical problems" are given. But experience (and some empirical studies) shows that this is not really true: especially for bigger projects (>100 person-days), a much bigger danger lies in getting the requirements right. Of course we work iteratively, and of course we re-estimate each feature right before we start implementing it. But our customers have internal processes, so they need to scope the project in detail upfront, and we need to tell them what we will deliver and what costs are involved. Actually, what we need is an adapted form of the planning game as described in agile methods: gather all functional requirements from an unstructured document and roughly estimate on the basis of these requirements, in order to define features and scope the project right.
There are a lot of books about classical requirements management for the usual waterfall project model. In the PHP world, things are more dynamic and change more often. We have now solved this problem for a fairly big project. Our PHP-minded approach looks like this:
1. We get a functional requirements document from the customer. This is his "wishlist"
2. We derive the actual requirements from this document. This is important because the customer tends to scatter similar requirements across the whole specification, so the structure of his document is usually not suitable for our project organisation.
We list the requirements in an excel sheet that has the following columns:
- Chapter in the original document of the customer
While we isolate requirements, we immediately associate them with features. If no previously defined feature matches the new requirement, we add a new one. In Excel that is easy: just type a new feature name; Excel's auto-completion helps avoid misspellings when referring to a previously defined feature. Feature numbers are assigned in a separate list and also written to our requirements list.
We put every requirement we find into the list, even duplicates (i.e. requirements mentioned in a different chapter). This way, we build a relation between chapters in the requirements document and our feature numbers.
3. Now we sort the list by feature. After that, we see all the requirements that were redundant. We keep them in the list, but reduce the estimated hours of the redundant ones to 0. This way we keep the relation but avoid doubling the estimated cost.
4. My new favorite feature in Excel: pivot tables. I am not a Microsoft fan, but this is really cool: Excel can aggregate all estimated hours per feature in such a table. The result is a list of distinct features with the associated estimated days.
5. We can then discuss this table with the customer. We know where the effort comes from, and we know where in the specification document the requirements for it were hidden. The customer might say: "Let's talk about this feature, I don't want to spend so much time on it." We can then look into the requirements list for this feature, check what the biggest points are, and see whether we can reduce effort here.
6. During this discussion, we add another two columns to our Excel sheet: "New Estimation" and "New Comment". After (or better: during) the discussion with the customer, we fill in the newly estimated person-days and a comment on what changed compared to the old estimation. Then we define another column in the pivot table that aggregates these new estimations per feature for us.
7. Finally, we write down the features, along with the references to the specification document, the newly estimated time and the new comments (if any), in a technical specification document, add some design and architecture information, and we have our offer.
We have tried quite a lot of approaches in projects of different sizes, but so far this seems to be the best way for us to deal with big specification documents the PHP way. Maybe someday, if I have a lot of time, I will write a database-driven PHP app to replace the Excel sheet. But so far, it works great this way.
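As a teaser for that hypothetical app: the pivot-table step (steps 3 and 4 above) boils down to a simple group-and-sum. A sketch with made-up requirement rows, where duplicates have already been zeroed out as described in step 3:

```php
<?php
// Each row is one isolated requirement: source chapter, associated
// feature and estimated hours (duplicates already reduced to 0).
$requirements = array(
    array('chapter' => '2.1', 'feature' => 'Login',  'hours' => 8),
    array('chapter' => '4.3', 'feature' => 'Login',  'hours' => 0),  // duplicate
    array('chapter' => '3.2', 'feature' => 'Search', 'hours' => 16),
);

// Aggregate estimated hours per distinct feature, like the pivot table.
$perFeature = array();
foreach ($requirements as $req) {
    $feature = $req['feature'];
    if (!isset($perFeature[$feature])) {
        $perFeature[$feature] = 0;
    }
    $perFeature[$feature] += $req['hours'];
}

print_r($perFeature);  // Login => 8, Search => 16
```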
Tuesday, March 25. 2008
The Zend Download Server (ZDS), part of the Zend Platform product, takes over long-running download processes from the Apache/PHP instance running the actual web application. The download is then handled by a very lightweight process, saving resources. This is very similar to X-LIGHTTPD-send-file in lighttpd or its Apache port mod_xsendfile (although those only do half of the job). The advantage of ZDS is that it can also send strings and streams that do not necessarily reside as files on any hard drive. It is as easy as calling:
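The original code sample did not survive here; for the exact ZDS call, check the Zend Platform documentation. For comparison, the analogous hand-off with the mod_xsendfile approach mentioned above looks like this (paths and filenames are made up):

```php
<?php
// With Apache's mod_xsendfile, the application emits a header and lets
// the web server stream the file; ZDS follows the same hand-off idea,
// but can additionally serve strings/streams that never touch the disk.
header('X-Sendfile: /var/www/downloads/big-archive.zip');
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="big-archive.zip"');
exit; // no body from PHP; the server sends the file itself
```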
So much for the theory. There are three issues to consider, though: