It sometimes happens in academically-backed open-source tools that flaws which any other industry would treat as glaring security holes are for some reason not considered important or worrisome. Simple problems, such as the ability to upload and execute an arbitrary file on a server, are easy to fix, yet related exploits can be found on any number of public websites in the biotech sector. I would never claim to be a security expert, but getting the basics right is within everyone's grasp.
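To illustrate how basic the fix for the upload problem can be, here is a minimal sketch of upload-name validation in Python. The extension allow-list and the helper name `safe_upload_name` are my own illustrative choices, not taken from any particular tool; the point is simply to reject anything executable and strip path components before a filename ever touches the filesystem.

```python
import os
import re

# Hypothetical allow-list for a bioinformatics upload endpoint: accept only
# plain data formats, never anything a web server might execute.
ALLOWED_EXTENSIONS = {".fasta", ".fastq", ".csv", ".txt"}

def safe_upload_name(filename):
    """Return a sanitised filename, or None if the upload should be rejected."""
    # Strip any directory components to block path traversal (../../etc/passwd),
    # normalising Windows-style separators first.
    basename = os.path.basename(filename.replace("\\", "/"))
    root, ext = os.path.splitext(basename)
    # Reject disallowed extensions outright rather than trying to "clean" them.
    if ext.lower() not in ALLOWED_EXTENSIONS:
        return None
    # Keep only a conservative character set in the remaining name.
    root = re.sub(r"[^A-Za-z0-9._-]", "_", root)
    if not root:
        return None
    return root + ext.lower()
```

A few lines like these, plus storing uploads outside the web root with execute permissions removed, close off the whole class of exploit described above.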
Commonly, tool authors either have not considered that their software might be exposed to public access (or are simply more interested in the science than in keeping the bad guys out), or they do not believe their tool will ever be prominent enough to become a target - i.e. security through obscurity. This position is becoming ever harder to defend. Recent attacks by highly skilled hackers on seemingly pointless targets (e.g. LulzSec vs. The X Factor) demonstrate that any public site is a target. In the world of biotechnology, with activists seeking targets for protests against genetically modified crops and animal testing, more and more institutes could find themselves in the line of fire. Big pharma are especially vulnerable whenever bad publicity spreads about one of their drugs, leaving dissatisfied customers seeking revenge for perceived damages, real or otherwise.
So what to do? Big pharma like to ensure that any third-party tools are firewalled and authenticated for use by their own staff only. Mature authentication mechanisms are generally much harder to crack than the open-source tools they protect. This way it matters less if the open-source bioinformatics tool itself can be broken into - it still matters to some extent, which is why such tools are generally placed in DMZs for safety's sake - but the important thing is that systems are in place to stop an attacker getting anywhere near the more vulnerable components in the first place.
For institutes hosting public-facing websites built around or including open-source bioinformatics tools, the only way forward is to work with the developers of those tools to close the loopholes and plug the gaps. In an ideal world the open-source developers would not have made such mistakes in the first place, but many of these tools are written by people without a formal computer science background who are not necessarily aware of the full range of risks involved in putting their code live on the internet, so this could be an uphill struggle. It is largely up to the deployers of these tools to scan them for vulnerabilities, write any relevant patches, then contribute those patches back to the project in question. They then just have to hope that the project's maintainers consider security a high enough priority to actually apply those patches, improving the overall quality and robustness of the code for the benefit of all its users.
Perhaps a basic course on web security should be a compulsory part of all bioinformatics qualifications?