How to Monitor your CentOS 7 Server using Cacti   
Cacti is a free and open source network graphing solution. It uses RRDtool for data gathering and graphing, and provides many features such as remote and local data collectors, network discovery, device management automation, and graph templating. In this tutorial, we will install Cacti on a CentOS 7 server.
          How to Install Joomla with Apache on Debian 9 (Stretch)   
Joomla is one of the most popular and widely supported open source content management system (CMS) platforms in the world. It can be used to build, organize, manage and publish content for websites, blogs, intranets and mobile applications. This tutorial describes the installation of Joomla with the Apache web server and MariaDB on Debian 9.
          Hackaday Prize Entry: Gaming Done Tiny with Keymu   

The world’s tiniest Game Boy Color, introduced at the 2016 Hackaday SuperConference, is a work of art. This microscopic game console inspired [c.invent] to create his own gaming handheld. His Keymu project describes an open source, keychain-sized gaming handheld that its builder claims is really the world’s tiniest. How did he make it smaller? It’s a miniature Game Boy Advance SP, and it folds up in a handy clamshell case.

While he’s a Pi fan, [c.invent] felt the Pi Zero was too big and clunky for what he had in mind: a keychain-sized handheld. Only the Intel Edison was …read more

          Open source express: Cloud Foundry, C-Radar, Smile, Alfresco   
In brief: Microsoft and Orange join the Cloud Foundry Foundation, the former Data Publica is acquired by Sidetrade, Smile takes over the Hypertexte agency, and a new appointment at Alfresco.
          Kickstarter success RuuviTag is now available, first batch almost gone overnight   

A Finnish startup by the name of Ruuvi introduced its Bluetooth beacon project to the internet in a Kickstarter campaign last summer. The campaign was funded in mere hours, and in the end it crushed its goal with pre-orders exceeding $170,000.

Now the company has officially released the product, which according to its press release came quicker than expected. Since yesterday's launch, RuuviTag has been selling like hotcakes. The founder of Ruuvi, Lauri Jämsä, told us that demand has been so high that the supply has almost been depleted.

In the upcoming batches, the company is aiming more and more at the B2B market, where orders might exceed a thousand units.

RuuviTag is a small Bluetooth 4.2 beacon that has radio and NFC connectivity as well as a multitude of sensors that can measure temperature, relative air humidity, air pressure, and acceleration. It can be used as an Eddystone or iBeacon, and the RuuviLab community has already created a variety of applications, including a portable weather station and a vehicle locator. And because it's an open source platform, anyone can join in.
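The sensor payload described above is straightforward to work with programmatically. The sketch below decodes a raw advertisement payload following Ruuvi's published "data format 3" layout; the field offsets and scale factors are taken from that spec, and the sample bytes are fabricated for illustration.

```python
def decode_ruuvi_v3(payload: bytes) -> dict:
    """Decode a RuuviTag 'data format 3' manufacturer-data payload.

    Assumed layout (per Ruuvi's published data format 3 spec):
      byte 0      format identifier (0x03)
      byte 1      humidity, in 0.5 %RH steps
      bytes 2-3   temperature: sign bit + integer part, then 1/100 fraction
      bytes 4-5   pressure, Pa above 50000 (big-endian uint16)
      bytes 6-11  acceleration X/Y/Z in mG (big-endian int16 each)
      bytes 12-13 battery voltage in mV
    """
    if payload[0] != 0x03:
        raise ValueError("not data format 3")
    sign = -1 if payload[2] & 0x80 else 1
    temperature = sign * ((payload[2] & 0x7F) + payload[3] / 100)
    accel = [
        int.from_bytes(payload[i:i + 2], "big", signed=True)
        for i in (6, 8, 10)
    ]
    return {
        "humidity_rh": payload[1] * 0.5,
        "temperature_c": temperature,
        "pressure_pa": 50000 + int.from_bytes(payload[4:6], "big"),
        "accel_mg": accel,
        "battery_mv": int.from_bytes(payload[12:14], "big"),
    }

# Fabricated sample: 41 %RH, 26.30 °C, 101102 Pa, accel (100, -100, 1000) mG, 3000 mV
sample = bytes([0x03, 0x52, 0x1A, 0x1E, 0xC7, 0x9E,
                0x00, 0x64, 0xFF, 0x9C, 0x03, 0xE8, 0x0B, 0xB8])
print(decode_ruuvi_v3(sample))
```

In practice the payload would come out of the manufacturer-specific field of a BLE advertisement scanned with a library such as bleak; the decoding itself is independent of the transport.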

You can buy RuuviTags from the company's website, where they sell for 69 euros (approx. $79) for a pack of three beacons, plus shipping.


          How Does Percona Software Compare: the Value of Percona Software and Services   
In this blog post, I’ll discuss my experience as a Solutions Engineer in explaining the value of Percona software and services to customers. The inspiration for this blog post was a recent conversation with a prospective client. They were exploring open source software solutions and professional services for the first time. This is a fairly common […]
          RE[3]: ARX and other thoughts   
Linux is losing market share! Apple Users are homos! Windows XP crashes all the time! MSWord is a standard! Amiga users are in denial! OSnews is biased towards Apple/Ubuntu/Microsoft! Vista is just XP-SP3/a new skin! Ubuntu runs fine on 128MB! Firefox leaks memory! Novell are the new Microsoft! It's not open source, I'm not interested! EVERYTHING must be GPL! Will that do? :3
          RE[6]: ARX and other thoughts   
Buddy, we're fast approaching the tenth anniversary of the Year of My Linux Desktop. As for that other OS, or any other, it won't be the Year of My Other OS Desktop until I can get decent hardware support, stability, decent software support, and DRM-, bloat- and FUD-less operation, all in the same package. Open source would be nice, too.
          RE[7]: ARX and other thoughts   
Yeah, open source would be great. My grandmother can't wait to compile the latest kernel...
          RE[8]: ARX and other thoughts   
The open source OpenOffice and Firefox packages (which are also available for Windows) don't have kernels, let alone ones you have to compile. And you don't have to compile the kernel on most distros anyway. It's FUD like that which prevents Linux users from taking Windows users seriously. The sad part is that, although my memory may be faulty on the point, I seem to remember that tomcat wasn't always so full of anti-FOSS/Linux crap.
          AT&T to launch software-based 10G XGS-PON trial   
AT&T announced it will conduct a 10 Gbit/s XGS-PON field trial in late 2017 as it progresses with plans to virtualise access functions within the last mile network.

The next-generation PON trial is designed to deliver multi-gigabit Internet speeds to consumer and business customers, and to enable all services, including 5G wireless infrastructure, to be converged onto a single network.

AT&T noted that XGS-PON is a fixed-wavelength, symmetrical 10 Gbit/s passive optical network technology that can coexist with the current GPON technology. It can provide 4x the downstream bandwidth of the existing system and is as cost-effective to deploy as GPON. As part of its network virtualisation initiative, AT&T plans to place some XGS-PON functions in the cloud, with software leveraging open hardware and software designs to speed development.
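The "4x downstream" figure can be sanity-checked against the nominal line rates of the two technologies. The rates below are from the ITU-T standards (G.984 for GPON, G.9807.1 for XGS-PON), not from AT&T's announcement:

```python
# Nominal line rates in Gbit/s (ITU-T G.984 / G.9807.1, not stated in the article).
gpon = {"down": 2.488, "up": 1.244}
xgs_pon = {"down": 9.953, "up": 9.953}  # symmetrical

down_gain = xgs_pon["down"] / gpon["down"]
up_gain = xgs_pon["up"] / gpon["up"]
print(f"downstream gain: {down_gain:.1f}x, upstream gain: {up_gain:.1f}x")
```

The symmetry is the bigger change: downstream quadruples, but upstream is eight times GPON's nominal rate, which is what makes XGS-PON attractive for backhauling 5G cell sites.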
AT&T has worked with ON.Lab to develop and test the ONOS (Open Network Operating System) and VOLTHA (Virtual OLT Hardware Abstraction) software, which allows the lower-level details of the silicon to be hidden. AT&T stated that it has also submitted a number of open white box XGS OLT designs to the Open Compute Project (OCP) and is currently working with the project to gain approval for the solutions.

The company noted that interoperability is a key element of its Open Access strategy and prompted the creation of the OpenOMCI specification, which provides an interoperable interface between the OLT and home devices. This specification, which forms a key part of its software-defined networking (SDN) and network functions virtualisation (NFV) efforts, has been distributed to standards and open source communities.

  • AT&T joined OCP in January 2016 to support its network transformation program. Earlier this year at the OCP Summit, Edgecore Networks, a provider of open networking solutions and a subsidiary of Accton Technology, announced design contributions to OCP, including a 25 Gigabit Ethernet top-of-rack switch and a high-density 100 Gigabit Ethernet spine switch. The company also showcased new open hardware platforms.
  • At the summit, Edgecore displayed a disaggregated virtual OLT for PON deployments at up to 10 Gbit/s, based on the AT&T Open XGS-PON 1 RU OLT specification that was contributed to the OCP Telco working group.
  • Edgecore's ASFvOLT16 disaggregated virtual OLT is based on the AT&T Open XGS-PON 1 RU OLT specification and features Broadcom StrataDNX switch and PON MAC SoC silicon. It offers 16 ports of XGS-PON or NG-PON2, along with 4 x QSFP28 ports, and is designed for next-generation PON deployments and R-CORD telecom infrastructure.

          Cavium and China Unicom trial 5G user cases on M-CORD   
Cavium, a provider of semiconductor products for enterprise, data centre, and wired and wireless networking, and China Unicom announced a targeted program for testing 5G use cases on an M-CORD SDN/NFV platform. The platform leverages Cavium's silicon-based white box hardware in M-CORD racks populated with ThunderX ARM-based data centre COTS servers and XPliant programmable SDN Ethernet-based white box switches.

Under the program, China Unicom and Cavium plan to shortly commence trials in a number of locations across mainland China to explore the potential of the new service.

Specifically, at Mobile World Congress (MWC) Shanghai, Cavium and China Unicom are demonstrating multi-access edge computing (MEC) use cases developed through a previously announced collaboration based on the ON.Lab M-CORD (Mobile Central Office Re-architected as a Datacentre) SDN/NFV platform.

The demonstration involves an M-CORD SDN/NFV software platform and hardware rack integrated with virtualised and disaggregated mobile infrastructure elements, from the edge of the RAN to the distributed mobile core, together with the ONOS and XOS SDN and orchestration software.

The companies stated that this architecture is designed to enable turnkey operation in any central office or edge data centre for a full NFV C-RAN deployment. The solution is based on a Cavium-powered rack that combines the ThunderX ARM based data centre servers with the programmable XPliant Ethernet leaf and spine SDN switches to provide a full platform for M-CORD.

Regarding the latest project, Raj Singh, VP and GM of the network and communication group at Cavium, said, "Cavium is collaborating with China Unicom to explore 5G target use cases leveraging the M-CORD SDN/NFV platform and working towards field deployment… a homogeneous hardware architecture optimised for NFV and 5G is a prerequisite for field deployments".

  • Earlier this year, Radisys and China Unicom announced they had partnered to build and integrate M-CORD development PODs featuring open source software. For the project Radisys, acting as systems integrator, used the CORD open reference implementation to enable cloud agility and improved economics in China Unicom's network. The companies also planned to develop deployment scenarios for the solution in the China Unicom network.
  • The resulting platform was intended to support future 5G services by enabling mobile edge services, virtualised RAN and virtualised EPC. The companies also planned to develop open reference implementations of a virtualised RAN and a next-generation mobile core architecture.

          MPLS-TP OpenFlow extensions approved   
ZTE announced that the 'MPLS-TP OpenFlow Protocol Extensions for SPTN' (ONF TS-029) technical document proposed by China Mobile has become a formal standard of the ONF (Open Networking Foundation) after receiving unanimous approval from the forum's board of directors.

The release of MPLS-TP OpenFlow Protocol Extensions for SPTN is intended to provide a foundation for interworking between devices from heterogeneous vendors, and between devices and controllers. ZTE noted that China Mobile's large-scale deployment of software-defined packet transport network (SPTN) devices provides an example for other operators, while five operators are believed to be planning to implement the standard in the near future.

ZTE stated that packet transport network (PTN) technology features separated forwarding and control planes and a centralised management architecture, while OpenFlow offers an open protocol that performs programmable control of flow tables on the forwarding plane. In addition, an abstract adaptation layer that uses OpenFlow to encapsulate the existing forwarding functions of PTN is intended to provide an efficient means of giving PTN devices open and software-defined features.

Additionally, this design is expected to facilitate the commercialisation of PTN devices supporting SDN and thereby accelerate the development of the SPTN supply chain.

It was noted that China Mobile has a longstanding commitment to SPTN technology: in mid-2015, working with ZTE, Broadcom and Microsemi, it established a discussion group within the ONF to research device specifications for SPTN based on OpenFlow and table type patterns (TTP).

In November 2015, a first ONF draft was proposed based on the SPTN TTP. It extended flow tables, group tables and related fields to support MPLS-TP, expanded the OF-Config protocol to support QoS, OAM, protection and alarm performance configuration, and leveraged local OAM processing units to ensure a 50 ms protection switching time.

In tandem with the draft specification, China Mobile also organised lab tests for SPTN devices complying with the specifications and amended the document in accordance with the test results. The draft document was subsequently passed for review by experts from a number of ONF technical groups and adopted as a formal standard.

ZTE stated that the MPLS-TP OpenFlow Protocol Extensions for SPTN standard is supported by the SPTN industrial supply chain, including chip manufacturers Broadcom, Microsemi, Centec and Marvell; equipment providers ZTE, Ericsson, Fiberhome, Raisecom, Greenwell, Chuling and Huahuan; instrument manufacturer Spirent; and the open source software projects OpenDaylight and ONOS.

To date it is estimated that more than 50 operators have deployed MPLS-TP-based PTN devices at scale, including China Mobile, which purchased around 590,000 group customer devices compliant with the SPTN TTP standard in 2016. In addition, six equipment vendors have worked with China Mobile to deploy the networks.

          Twenty-nine; life as a 28-year-old   
Today I turned twenty-nine years old! This is fantastic. I’m making good progress towards my goal of becoming a little old lady living an awesome life. =) Here’s the bird’s-eye view, with links to annual reviews whenever I remembered to write them: 19 years old: Finished university, got into open source development 20 years old: […]
          OSS Leftovers   
  • AT&T Passive Optical Network Trial to Test Open Source Software

    AT&T is set to trial 10-gigabit symmetric passive optical network technology (XGS-PON), tapping its growing virtualization and software expertise to drive down the cost of next-generation PON deployments.

    The carrier said it plans to conduct the XGS-PON trial later this year as part of its plan to virtualize access functions within the last mile network. Testing is expected to show support for multi-gigabit-per-second Internet speeds and allow for the merging of services onto a single network. Services to be supported include broadband and backhaul of wired and 5G wireless services.

  • Intel Begins Prepping Cannonlake Support For Coreboot

    The initial commit, which happened this morning, is mostly structuring for the Cannonlake SoC enablement plus some boilerplate work, with more of the enablement still to come. Beyond that, the UART initialization has landed, while more Cannonlake code has yet to arrive -- this is the first post-Kabylake code in Coreboot.

  • Windstream Formally Embraces Open Source

    Windstream is dipping its toe into the open source waters, joining the Open Network Automation Project (ONAP), its first active engagement in open source. (See Windstream Joins ONAP.)

    The announcement could be the sign of broader engagement by US service providers in the open source effort. At Light Reading's Big Communications Event in May, a CenturyLink speaker said his company is also looking closely at ONAP. (See Beyond MANO: The Long Walk to Network Automation.)

    Windstream has been informally monitoring multiple open source efforts and supporting the concept of open source for some time now, says Jeff Brown, director of product management and product marketing at Windstream. The move to more actively engage in orchestration through ONAP was driven by the growing influence of Windstream's IT department in its transition to software-defined networking, he notes.

  • Why Do Open Source Projects Fork?

    Open source software (OSS) projects start with the intention of creating technology that can be used for the greater good of the technical, or global, community. As a project grows and matures, it can reach a point where the goals of or perspectives on the project diverge. At times like this, project participants start thinking about a fork.

    Forking an OSS project often begins as an altruistic endeavor, where members of a community seek out a different path to improve upon the project. But the irony of it is that forking is kind of like the OSS equivalent of the third rail in the subway: You really don’t want to touch it if you can help it.

  • Mozilla Employee Denied Entry to the United States [iophk: "says a lot about new Sweden"]

    Daniel Stenberg, an employee at Mozilla and the author of the command-line tool curl, was not allowed to board his flight to the meeting from Sweden—despite the fact that he’d previously obtained a visa waiver allowing him to travel to the US.

    Stenberg was unable to check in for his flight, and was notified at the airport ticket counter that his entry to the US had been denied.

  • Print your own aquaponics garden with this open source urban farming system

    Aquapioneers has developed what it calls the world's first open source aquaponics kit in a bid to reconnect urban dwellers with the production of their food.

  • The story behind Kiwix, an offline content provider

    Kiwix powers offline Wikipedia and provides other content to users, like all of the Stack Exchange websites.

  • Systemd flaw leaves many Linux distros open to attack

          Events: openSUSE.Asia Summit 2017, Diversity Empowerment Summit North America, Technoshamanism in Aarhus, ELC Europe   
  • openSUSE.Asia Summit 2017 Tokyo, Japan

    It is our great pleasure to announce that openSUSE.Asia Summit 2017 will take place at the University of Electro Communications, Tokyo, Japan on October 21 and 22.

    The openSUSE.Asia Summit is one of the great events for the openSUSE community (both contributors and users) in Asia. Those who usually communicate online can get together from all over the world, talk face to face, and have fun. Members of the community will share their most recent knowledge and experiences, and learn about the FLOSS technologies surrounding openSUSE.

    This event in Tokyo is the fourth openSUSE.Asia Summit. Following the first Asia Summit in Beijing in 2014, the summit has been held annually: the second was in Taipei, Taiwan, and the third in Yogyakarta, Indonesia, last year. Past Asia Summits have had participants from China, Taiwan, India, Indonesia, Japan, and Germany.

  • Program Announced for The Linux Foundation Diversity Empowerment Summit North America

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the program for the Diversity Empowerment Summit North America, taking place September 14 in Los Angeles, CA as part of Open Source Summit North America. The goal of the summit is to help promote and facilitate an increase in diversity, inclusion, empowerment and social innovation in the open source community, and to provide a venue for discussion and collaboration.

  • Technoshamanism in Aarhus – rethink ancestrality and technology
  • Last Chance to Submit Your Talk for Open Source Summit and ELC Europe

          WPWeekly Episode 271 – Recapping WordCamp Chicago 2017 With John James Jacoby   
In this episode, I’m joined by John James Jacoby. We recap WordCamp Chicago 2017 and learn about what he’s been up to as of late. Jacoby was recently elected as a trustee by the Village of East Troy, WI. We discussed what lessons he’s learned through open source software development (more...)
          New Java Magazine Edition about UI Tools    

From Chief Editor of Java Magazine Andrew Binstock

Coding UIs used to be an awful chore, with endless minute adjustments having to be constantly recorded. Fortunately, JavaFX greatly facilitated UI construction by scripting it with FXML, which is discussed in our first article. Our second article explores the drag-and-drop design tool, Scene Builder, which can generate FXML. Scene Builder was originally an Oracle tool that was released to open source and taken over by Gluon, which has been maintaining it ever since.

Front ends to web applications have their own unique needs, and we cover those too in a pair of articles: one on MVC 1.0, a web framework that at one time was considered for inclusion in Java EE 8, and another on a JavaScript toolkit, Oracle JET, which provides among many resources a large palette of useful controls with easy ways to wire them together.

If UIs are not your favorite topic, we have other subjects of interest: a detailed discussion of using MQTT, one of the main messaging protocols in IoT. You'll also find an interesting dive into how the up-and-coming build tool Gradle uses libraries. And finally, we revisit a topic we've covered before: Compact Profiles in Java 8. In addition, of course, we offer our usual quiz (this time including questions from the entry-level exam), our review, and other goodness, such as readers' views on whether to include JavaScript in future issues.

The previous edition was about Java and JVM tools.

          Congratulations New Java Champion Bob Paulin   

Welcome New Java Champion Bob Paulin 

Bob Paulin is an independent consultant working for different IT firms. He has 15 years of experience as a developer and has contributed to open source software for the past 10 years.

Bob is currently an ASF member and actively contributes to Apache Tika, Apache Felix, and Apache Sling. He was nominated as a JCP Outstanding Adopt-a-JSR participant for his involvement with Java EE 8. He has run numerous JDK 9 workshops in the Chicago area.

Bob is the co-host of a podcast on a range of Java topics, standards, tools, and techniques. He also participates regularly in Java Off-Heap, a podcast about Java technology news.

Bob has run the Devoxx4Kids and GotoJr conferences in Chicago, allowing kids to hack in Minecraft, play with Lego robots, and use conductive Play-Doh. These efforts have enriched the lives of students and are helping inspire them to pursue technical careers. Follow him on Twitter @bobpaulin.

Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Learn more about Java Champions

          New Java Champions: Emmanuel Bernard, Chris Newland and Bert Jan Schrijver   

Welcome three new Java Champions: Emmanuel Bernard, Chris Newland and Bert Jan Schrijver


Emmanuel Bernard is a data platform architect at JBoss. He has been a contributor to and lead of open source projects for over 15 years and has led the Hibernate portfolio since 2008. He also contributed to the JPA specs as a JCP expert group member and to the Bean Validation spec as JSR spec lead.

He is the author of Hibernate Search in Action, a reference guide for Hibernate Search. Aside from speaking at Java conferences around the world, he runs two podcasts, JBoss Community Asylum, and the French podcast, “Les Cast Codeurs”.

Chris Newland is a senior developer and team lead at ADVFN using Java to process stock market data feeds in real time. In his spare time, he invented and still leads developers on the JITWatch project, an open source log analyzer to visualize and inspect Just-In-Time compilation decisions made by the HotSpot JVM.

Chris is also a JavaFX developer and built a performance benchmark called DemoFX. He set up a community OpenJFX backend for cross-platform builds, so that OpenJFX can be used with ARM JDKs and OpenJDK-based builds from other vendors such as Azul Systems' Zulu JDK.


Bert Jan Schrijver is a senior developer at JPoint. A tireless community organizer, he organizes J-Fall, the biggest Java conference in the Netherlands, as well as the IoT Tech Days, a one-day conference on smart technologies. He is the chief editor of Java Magazine in the Netherlands, which has 4,000 subscribers.

Bert Jan worked with the JCP to have the NLJUG join the Adopt-a-JSR program. The NLJUG was nominated as an 'Outstanding Adopt-a-JSR Participant' for the JCP awards this year. Just in the past year, Bert Jan helped the Utrecht and Amsterdam JUGs start up their user groups. He is always eager to share experiences with other JUG leaders, and he helps out at Devoxx4Kids workshops to teach kids how to code.

The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Learn more about Java Champions 

          Congratulations New Java Champion Oliver Gierke   

Welcome New Java Champion Oliver Gierke

Oliver Gierke is leading the Spring Data project at Pivotal. He is an active member of the JCP expert group on JPA 2.1 and one of the main organizers of the JUG Saxony Day, OOP, JAX and WJAX conferences.

Oliver coined the Spring Data repository programming model, a widely used Java abstraction for developing data access layers for relational and non-relational databases. This simplifies the way Java developers interact with persistence technologies, as Spring Data provides an abstraction over APIs such as JPA. He is one of the leading experts on JPA and other persistence technologies. With Spring Data REST, he helped Java developers implement REST APIs. He also coined the Spring HATEOAS module, helping Java developers use hypermedia elements in REST APIs when using Spring MVC or JAX-RS.
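Spring Data's repository model is a Java-interface mechanism, but the underlying idea of deriving queries from method names is easy to illustrate. The following is a toy Python sketch over an in-memory store; all names are invented for illustration, and it mimics only the find-by-field naming convention, not Spring Data's real machinery:

```python
class InMemoryRepository:
    """Toy repository: method names like `find_by_<field>` become queries.

    Spring Data derives queries from method names declared on Java
    interfaces; this sketch fakes the same convention via __getattr__.
    """

    def __init__(self, items):
        self._items = list(items)

    def save(self, item):
        self._items.append(item)
        return item

    def find_all(self):
        return list(self._items)

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so real
        # methods like save/find_all are unaffected.
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]
            return lambda value: [i for i in self._items if i.get(field) == value]
        raise AttributeError(name)

repo = InMemoryRepository([
    {"name": "Oliver", "project": "Spring Data"},
    {"name": "Emmanuel", "project": "Hibernate"},
])
print(repo.find_by_project("Hibernate"))  # → [{'name': 'Emmanuel', 'project': 'Hibernate'}]
```

The appeal of the real thing is the same as in this sketch: the data access layer is described declaratively, and the boilerplate query code is generated for you.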

Oliver is a consulting lecturer at the Chair of Software Engineering at TU Dresden, helping students get started with application development in Java. All of his material is available online, which makes it easy for student developers to experiment with Java and receive a professional introduction to the language and Java development practices.

Oliver contributes almost daily to diverse open source frameworks on GitHub. He is a frequent speaker at many conferences, including BEDcon, OOP, JavaZone, Devoxx, SpringOne, JavaOne, JFokus and Øredev, to name a few. Follow him at @olivergierke.

          From the Just Getting Through Last Week's Stuff Dept: Microsoft / Novell Deal: IT wins due to InterOp!!!!! YAY!   


Wow. I get buried for a week and get transported into a parallel universe. Microsoft and Novell make an historic agreement. And while some folks in the open source community aren't happy, it seems most (including me) think this is a pretty good deal for building software in general... I mean, just look at these three things:
  • "...Microsoft and Novell will jointly develop a compelling virtualization offering for Linux and Windows..."
- Right on!
  • "...make it easier for customers to federate Microsoft Active Directory with Novell eDirectory"
- This has been a pain...
  • "...will take steps to make translators available to improve interoperability between Open XML and OpenDocument formats"
Nice! Guys, in the words of the great philosopher and sage Rodney King: "Can't we all just get along?"... I think that this agreement moves all of us who build software one step closer to doing just that.
          Data Tidying and Visualization with R   
Thu, Aug 03 02:00 PM until 03:30 PM Eastern Time (US & Canada)
In this hands-on workshop, we will practice these skills by tidying and visualizing "messy" datasets using the free, open source programming language R.
Location: Instructional Center, 37 Dewey Field Road

          OSS víkend 2017: open source is usable in the office, too   
In mid-June, Bratislava hosted the fourth annual gathering of open technology enthusiasts. Talks covered topics such as web trends, security, and the use of open source in everyday office work. "The web has changed fundamentally over the past decade. It used to be primarily a document; today it is mainly an experience. But not everyone respects that. An example is the recently launched customs declaration web application, which is essentially fifty-year-old paper documents transferred into the form of a web page," said Marek Galiński of RegEx, a company focused on web projects, introducing his talk.
          GitHub launches ‘Open Source Friday’ initiative to encourage community contributions   

GitHub has announced a new initiative called ‘Open Source Friday’ to encourage individual developers and companies to focus on open source developments. The online ...

The post GitHub launches ‘Open Source Friday’ initiative to encourage community contributions appeared first on Open Source For You.

          Using Pinba at Badoo: what you don't know yet   

Hi, Habr! My name is Denis, I'm a PHP developer at Badoo, and I'm going to tell you how we use Pinba ourselves. It is assumed that you already know what this tool is and have experience using it. If not, I recommend the article by my colleague Maxim Matyukhin as an introduction.

There is actually plenty of material on Habr about the use of Pinba at various companies, including Oleg Efimov's post on our blog. But it all concerns other companies, not Badoo, which is a bit illogical: we invented the tool ourselves and released it as open source, yet we don't share our experience. Yes, we often mention Pinba in publications and in talks at IT conferences, but it usually sounds something like "we got these wonderful graphs from Pinba data" or "we used Pinba for the measurements", and that's it.

Talking to colleagues at other companies revealed two things: first, quite a lot of people use Pinba, and second, some of them don't know about or don't use all of the tool's capabilities, and some don't fully understand its purpose. So I will try to cover the nuances that aren't explicitly stated in the documentation, the new features, and the most interesting ways Pinba is used at Badoo. Let's go!

Read more →
          How to Set Up Git Version Control For Your Drupal Website   
David Csonka Mon, 01/30/2017 - 05:01

Having version control for your Drupal website is incredibly useful. It not only benefits the programming workflow if you are involved with custom module development, but also helps if your primary concern is just support for Drupal core and module updates.

Version control makes it easier to deploy new code, keep track of changes in your code and, if there are problems, roll back your code to the version from before the error came about. If you are applying updates or changes directly to your code (definitely a bad idea for a live website), not having version control in place can make rolling back changes very laborious and could potentially introduce even more errors.

"Git" With the Program!

Git is a popular version control system that is used extensively throughout the open source web development community.

You can download Git from its official website, which also provides additional information about how to install it.

Now, it's one thing to just install Git, but it's another thing to make use of version control like Git in a way that is beneficial to your workflow, and helps keep your Drupal websites in good working order during updates. Here are some guidelines to help you along the way.

1. Set up a central or remote repository to work off of. A repository is a term for a contained set of files and commit objects for a Git-controlled project. It is like a history of the changes made to the project over time. By having a remote repository act as your official source (perhaps at a hosting provider), it can function as a sort of backup for your code. Additionally, it can be helpful as the transition space for moving your project and its code between different server environments, as with a Dev -> Staging -> Production workflow.

2. Use forks and branches so that you aren't always immediately pushing changes to your primary repository. Sometimes you may want to test changes, and it could take a little while to do that; in the meantime, an important emergency patch may need to be applied. If you are constantly pushing changes to your primary repository, the one you use to update your production website, then you may get into a situation where you have to pull in changes that you aren't ready to deploy yet.

You can learn more about Git "branches" and a "forking workflow" with these helpful tutorials (branches) and (forking), and with a more advanced fork/pull-request workflow for teams.
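As a sketch of the branch workflow described above (repository name, file names and commit messages are all illustrative, not from any particular project), the following shell session keeps an emergency fix on its own branch until it has been tested:

```shell
set -e
# Throwaway local repository to illustrate the workflow
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "<?php // site code" > index.php
git add . && git commit -qm "initial site code"
base=$(git symbolic-ref --short HEAD)   # default branch name varies (master/main)
# Apply the emergency patch on a dedicated branch, not on the main branch
git checkout -qb security-patch
echo "// patched" >> index.php
git commit -qam "apply security patch"
# ...test the patch here, then merge it back only once it is verified...
git checkout -q "$base"
git merge -q security-patch
git log --oneline
```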

One of the most important things to remember is that while version control can help you out in a tough situation by making it easier to rollback or update code, it won't actually make sure your code is well formulated or your patches don't create other problems. Always be sure to test your updates thoroughly.


Need more help working with version control?

We might be able to help you! Contact Us

          Partitioning in PostgreSQL   
Partitioning in PostgreSQL

Partitioning refers to splitting a large table into smaller pieces. This article covers the basics of partitioning in PostgreSQL. Currently, PostgreSQL supports range and ...

The post Partitioning in PostgreSQL appeared first on Open Source For You.

           Redis 代理服务Twemproxy    

1. Twemproxy overview

      When we run a large number of Redis or Memcached instances, we can usually only achieve clustered storage through client-side data distribution algorithms, such as consistent hashing. Although Redis Cluster was announced alongside Redis 2.6, it is not yet mature enough for production use. Until the official Redis Cluster solution is ready, we can implement clustered storage by going through a proxy.

       One of the world's largest Redis clusters is deployed at Twitter to serve users' timeline data, and Twitter's Open Source department provides Twemproxy.

     Twemproxy, also called nutcracker, is a Redis and Memcached proxy server open-sourced by Twitter. Redis is a highly efficient cache server with great practical value, but once you run many instances you want some way to manage them centrally, avoiding the looseness of every application and client managing its own connections, while gaining a degree of control.

      Twemproxy is a fast single-threaded proxy that supports the Memcached ASCII protocol and the newer Redis protocol.

     It is written entirely in C and released under the Apache 2.0 License. The project works on Linux but does not compile on OS X, because it depends on the epoll API.

      By introducing a proxy layer, Twemproxy manages and distributes requests across the Redis or Memcached instances behind it, so the application only needs to talk to Twemproxy and does not have to care how many actual Redis or Memcached stores sit behind it.


    • Automatic removal of failed nodes

      • The delay before retrying a removed node can be configured
      • The number of failures after which a node is removed can be configured
      • This mode is suitable for cache-style storage
    • HashTag support

      • With a HashTag you can force two keys to hash to the same instance
    • Fewer direct connections to redis

      • Keeps long-lived connections to redis
      • The number of connections between the proxy and each backend redis can be configured
    • Automatic sharding across multiple backend redis instances

      • Multiple hash algorithms: consistent hashing is supported with different strategies and hash functions
      • Weights can be set for the backend instances
    • No single point of failure

      • Several proxy instances can be deployed in parallel; the client automatically picks an available one
    • Support for redis pipelining requests


    • Status monitoring

      • A monitoring IP and port can be configured; querying them returns the status as a JSON string
      • The refresh interval of the monitoring information can be configured
    • High throughput

      • Connection reuse and memory reuse
      • Multiple requests are combined into a redis pipeline and sent to redis as one request

     Alternatively, one could modify the Redis source code and extract the front half of Redis to act as an intermediate proxy layer. Either way, concurrency ultimately relies on the Linux epoll event mechanism; nutcracker itself also uses epoll, and performs very well in benchmarks.


Because of how it works, Twemproxy has some limitations, such as:
  • No support for operations spanning multiple values, e.g. intersections, unions and differences of sets (MGET and DEL are exceptions)
  • No support for Redis transactions
  • Error reporting is still incomplete
  • No support for SELECT


To install Twemproxy, the main commands are:
apt-get install automake  
apt-get install libtool  
git clone git://  
cd twemproxy  
autoreconf -fvi  
./configure --enable-debug=log  
src/nutcracker -h

      listen: # the address and port Twemproxy listens on
      redis: true # whether this pool proxies Redis
      hash: fnv1a_64 # the hash function to use
      distribution: ketama # the key distribution algorithm
      auto_eject_hosts: true # whether to temporarily eject nodes that stop responding
      timeout: 400 # timeout in milliseconds
      server_retry_timeout: 2000 # retry interval in milliseconds
      server_failure_limit: 1 # number of failures before a node is ejected
      servers: # the list of Redis nodes (IP:port:weight)
      redis: true
      hash: fnv1a_64
      distribution: ketama
      auto_eject_hosts: false
      timeout: 400
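The fragment above shows the per-pool settings with the pool names and server lists elided. For reference, a complete minimal nutcracker.yml might look like this (the pool name and addresses are illustrative, not taken from the text above):

```yaml
alpha:                          # pool name (illustrative)
  listen:
  redis: true
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  timeout: 400
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:                      # ip:port:weight
    - 1
    - 1
```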

You can run several Twemproxy instances at the same time, all of them able to serve reads and writes, so your application can avoid the proxy becoming a single point of failure.

abin 2015-11-03 19:30

          Mozilla Firefox 55.0b6 Beta / 56.0a1 Nightly    
Mozilla Firefox is a fast, free and Open Source web browser that provides you with a highly customizable interface with numerous third-party add-ons, as well as Mozilla authored add-ons to choose from...
          Chromium 61.0.3146   
Chromium is the open source web browser project from which Google Chrome draws its source code....
          Calibre 3.2.1   
Calibre is an all-in-one Open Source tool to manage easily and view all of your electronic books....
          Platform Delivery Manager (Documentum, Pega) / HM Revenue and Customs / Telford, Shropshire, United Kingdom   
HM Revenue and Customs/Telford, Shropshire, United Kingdom

Platform Delivery Manager (Testing, Scrum, Manager)

Salary: £59,375-£65,625

Location: Telford and Newcastle (Regular travel to various client sites will be required as part of this role.)

Full Time

With 60,000+ staff and 50m customers, HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross-functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our Case Management Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Telford and Newcastle.

Summary of the Platform Delivery Manager (Testing, Scrum, Manager) role:

Reporting initially to the Head of Case Management and subsequently to the Head of Platform Delivery (HoPD), the Platform Delivery Manager will be a strategic leader who champions the use of technology to deliver innovative and effective solutions.

This new role has been created as part of the CMDG Transformation project. The successful candidate will be responsible for shaping, leading and developing an effective Platform Delivery team, with overall accountability for the end-to-end delivery of software solutions and the provision of resilient live service operations.

The Platform Delivery Manager (Testing, Scrum, Manager) will look after their customers directly by managing the change delivered and the DevOps needs of the systems they use.


The Platform Delivery Manager (Testing, Scrum, Manager) will

• Lead software delivery and Dev Ops teams and will be able to channel expert knowledge from many different sources to drive disciplines, standards and processes within the Platform and Delivery Group.

• Manage stakeholders from several different areas, each with their own agenda, including Senior Management, Business End Users, Suppliers, Architects, Project Managers, Business Analysts, Infrastructure and Software experts.

• Plan, schedule and monitor work to meet rigorous time and quality standards. Ensure risk management compliance within the Platform, ensuring that all functional, technical and business related issues are managed through effective mitigation planning and techniques.

• Manage a team of circa 30 permanent staff plus flexible resources, and understand resourcing models to support changing business demand signals.

• Take responsibility for strategic initiatives that affect the platform, reporting into CMDG's Leadership and Executive teams.

• Develop a strong relationship with their business and Service owners and be a credible performer at executive and business forums.

• Own key relationship with Central Live Service Operations to ensure all service needs are met in line with targets through provision of highly effective application maintenance and support activities that ensure a high quality of service to the customer.

• Be accountable for relationships within the Platform Delivery Team to ensure that CMDG benefits strategically and commercially through world class delivery.

• Rapidly absorb new information, apply and share it effectively.


Essential Knowledge, Criteria and Skills of the Platform Delivery Manager (Testing, Scrum, Manager)

• Extensive IT experience, including running and managing technical projects and or services to tight deadlines and budgets with a track record of delivering innovative solutions as value add to the Customer.

• Thorough grasp of the full range of IT issues encountered within project delivery, and demonstrable experience of driving resolution.

• A strong technical awareness of existing and developing technologies and their application.

• Experience of managing multiple parties including third party vendors on enterprise scale solutions.

• Leadership - Experience of sharing knowledge, with good mentoring and coaching skills

• Able to motivate staff to work together to deliver on time and to high quality.

• Outstanding communication skills. The ability to tailor communication both to business and technical audiences as well as to various roles and levels in the organization is essential.

• Ability to influence others and where necessary to make difficult decisions, managing any potential resultant conflict.

• A proactive approach to problem solving.


• Knowledge and awareness of all components involved in software delivery: Storage, Servers, Operating Systems, Network, Cloud, Data Management, External Data Centre, Service Transition, ITSM, SaaS

• Experience with Agile methodologies (TDD, SCRUM, Kanban, etc.) and of working within a multi-team scaled agile environment.

Demonstrable experience of

• practical understanding of how Dev Ops teams are set up and operate

• managing and deploying on Cloud based platforms

• continuous integration, automated deployment, testing and the relevant tooling such as Puppet, Chef and Jenkins.

• release and configuration management processes

• Installation and management of open source monitoring tools

Person Specification

• Demonstrates confident, professional approach

• Negotiating skills

• Problem solving and analytical skills

• Communication skills at all levels

• Attention to detail and accuracy

• Comfortable making difficult decisions

Equivalent SFIA level: 5

To apply for the role of Platform Delivery Manager (Testing, Scrum, Manager), please click 'apply now'.

CVs should clearly demonstrate how the candidate meets the essential criteria and qualifications stated above.

Sift Process

Applicants will be sifted based upon contents of the CV providing evidence of the essential criteria.

Interview panels for all roles may take place in Telford.

Employment Type: Permanent

Pay: 59,375 to 65,625 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £59,375 - £65,625

Apply To Job
          Weekend project: Ghetto RPC with redis, ruby and clojure   

There’s a fair amount of things that are pretty much settled on current architectures. Configuration management is handled by chef, puppet (or pallet, for the brave). Monitoring and graphing are getting better by the day thanks to products such as collectd, graphite and riemann. But one area which - at least to me - still has no obvious go-to solution is command and control.

There are a few choices which fall in two categories: ssh for-loops and pubsub based solutions. As far as ssh for loops are concerned, capistrano (ruby), fabric (python), rundeck (java) and pallet (clojure) will do the trick, while the obvious candidate in the pubsub based space is mcollective.

Mcollective has a single transport system, namely STOMP, preferably set-up over RabbitMQ. It’s a great product and I recommend checking it out, but two aspects of the solution prompted me to write a simple - albeit less featured - alternative:

  • There’s currently no other transport method than STOMP and I was reluctant to bring RabbitMQ into the already well blended technology mix in front of me.
  • The client implementation is ruby only.

So let me here engage in a bit of NIHilism and describe a redis based approach to command and control.

The scope of the tool would be rather limited and only handle these tasks:

  • Node discovery and filtering
  • Request / response mechanism
  • Asynchronous communication (out of order replies)

Enter redis

To allow out of order replies, the protocol will need to broadcast requests and listen for replies separately. We will thus need both a pub-sub mechanism for requests and a queue for replies.

While redis is primarily an in-memory key-value store with optional persistence, it offers a wide range of data structures (see the full list in the redis documentation) and pub-sub support. No explicit queue functions exist, but two operations on lists provide the same functionality.

Let’s see how this works in practice, with the standard redis-client redis-cli and assuming you know how to run and connect to a redis server:

  1. Queue Example

    Here is how to push items on a queue named my_queue:

    redis> LPUSH my_queue first
    (integer) 1
    redis> LPUSH my_queue second
    (integer) 2
    redis> LPUSH my_queue third
    (integer) 3

    You can now subsequently issue the following command to pop items:

    redis> BRPOP my_queue 0
    1) "my_queue"
    2) "first"
    redis> BRPOP my_queue 0
    1) "my_queue"
    2) "second"
    redis> BRPOP my_queue 0
    1) "my_queue"
    2) "third"

    LPUSH as its name implies pushes items on the left (head) of a list, while BRPOP pops items from the right (tail) of a list, in a blocking manner, with a timeout argument which we set to 0, meaning that the action will block forever if no items are available for popping.

    This basic queue mechanism is the main mechanism used in several open source projects such as logstash, resque, sidekiq, and many others.

  2. Pub-Sub Example

    Channels can be subscribed to through the SUBSCRIBE command. You’ll need to open two clients; start by issuing this in the first:

    redis> SUBSCRIBE my_exchange
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "my_exchange"
    3) (integer) 1

    You are now listening on the my_exchange exchange, issue the following in the second terminal:

    redis> PUBLISH my_exchange hey
    (integer) 1

    You’ll now see this in the first terminal:

    1) "message"
    2) "my_exchange"
    3) "hey"
  3. Differences between queues and pub-sub

    The pub-sub mechanism in redis broadcasts to all subscribers and will not queue up data for disconnected subscribers, whereas queues will deliver to the first available consumer, and will queue up messages (in RAM, so make sure of your consuming ability).

Designing the protocol

With these building blocks in place, a simple layered protocol can be designed, offering the following workflow:

  • A control box broadcasts a request with a unique ID (UUID), a command and a node specification
  • All nodes matching the specification reply immediately with a START status, indicating that the request has been acknowledged
  • All nodes refusing to go ahead reply with a NOOP status
  • Once execution is finished, nodes reply with a COMPLETE status

Acknowledgments and replies will be implemented over queues, solely to demonstrate working with queues, using pub-sub for replies would lead to cleaner code.

If we model this around JSON, we can thus work with the following payloads, starting with requests:

request = {
  reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  match: {
    all: false, /* setting to true matches all nodes */
    node_facts: {
      hostname: "www*" /* allowing simple glob(3) type matches */
    }
  },
  command: {
    provider: "uptime",
    args: {
      averages: {
        shortterm: true,
        midterm: true,
        longterm: true
      }
    }
  }
}

START responses would then use the following format:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "",
  status: "start"
}

NOOP responses would drop the sequence UUID, which is not needed:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  hostname: "",
  status: "noop"
}

Finally, COMPLETE responses would include the result of command execution:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "",
  status: "complete",
  output: {
    exit: 0,
    time: "23:17:20",
    up: "4 days, 1:45",
    users: 6,
    load_averages: [ 0.06, 0.10, 0.13 ]
  }
}
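To make the node specification concrete, here is a hedged ruby sketch of how an agent could decide whether a request applies to it (the matches? helper is mine, not part of the actual protocol code), using File.fnmatch for the glob(3)-style comparisons:

```ruby
require 'securerandom'

# Return true when this node's facts satisfy the request's match section.
def matches?(request, facts)
  match = request["match"]
  return true if match["all"]                 # "all: true" matches every node
  match["node_facts"].all? do |fact, pattern|
    File.fnmatch(pattern, facts[fact].to_s)   # glob(3)-style comparison
  end
end

request = {
  "reply_to" => SecureRandom.uuid,
  "match"    => { "all" => false,
                  "node_facts" => { "hostname" => "www*" } },
  "command"  => { "provider" => "uptime", "args" => {} }
}

puts matches?(request, "hostname" => "www01")   # => true
puts matches?(request, "hostname" => "db01")    # => false
```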

We essentially end up with an architecture where each node is a daemon while the command and control interface acts as a client.

Securing the protocol

Since this is a proof of concept protocol and we want implementation to be as simple as possible, a somewhat acceptable compromise would be to share an SSH private key specific to command and control messages amongst nodes and sign requests and responses with it.

SSL keys would also be appropriate, but using ssh keys allows the use of the simple ssh-keygen(1) command.
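Generating such a passphrase-less keypair is a one-liner (the cc_key file name is mine; RSA is shown here because some newer OpenSSH releases have dropped DSA key generation):

```shell
# Create a passphrase-less keypair for command-and-control signing
ssh-keygen -q -t rsa -N '' -f ./cc_key
ls cc_key cc_key.pub
```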

Here is a stock ruby snippet which performs signing with an SSH key, given a passphrase-less key.

require 'openssl'

signature = File.open('/path/to/private-key') do |file|
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file.read).syssign(digest)
end

To verify a signature here is the relevant snippet:

require 'openssl'

valid = File.open('/path/to/private-key') do |file|
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file.read).sysverify(digest, signature)
end

This implements the common scheme of signing a SHA1 digest with a DSA key (we could just as well sign with an RSA key by using OpenSSL::PKey::RSA)

A better way of doing this would be to sign every request with the host’s private key, and let the controller look up known host keys to validate the signature.

The clojure side of things

My drive for implementing a clojure controller is integration in the command and control tool I am using to interact with a number of things.

This means I only did the work to implement the controller side of things. Reading SSH keys meant pulling in the bouncycastle libs and the apache commons-codec lib for base64:

(import '[java.security                   Signature Security KeyPair]
        '[org.bouncycastle.jce.provider   BouncyCastleProvider]
        '[org.bouncycastle.openssl        PEMReader]
        '[org.apache.commons.codec.binary Base64])
(require '[clojure.java.io :as io])

(def algorithms {:dss "SHA1withDSA"
                 :rsa "SHA1withRSA"})

;; getting a public and private key from a path
(def keypair (let [pem (-> (PEMReader. (io/reader "/path/to/key")) .readObject)]
               {:public (.getPublic pem)
                :private (.getPrivate pem)}))

(def keytype :dss)

(defn sign
  [content]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initSign (:private keypair))
        (.update (.getBytes content)))
      (.sign)
      (Base64/encodeBase64String)))

(defn verify
  [content signature]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initVerify (:public keypair))
        (.update (.getBytes content)))
      (.verify (-> signature Base64/decodeBase64))))

Redis support has several options, I used the jedis Java library which has support for everything we’re interested in.

Wrapping up

I have early implementations of the protocol - read: with lots of room for improvement, and a few corners cut - both the agent and controller code in ruby, and the controller code in clojure, wrapped in my IRC bot in clojure, which might warrant another article.

The code can be found here: (name alternatives welcome!)

If you just want to try it out, you can fetch the amiral gem in ruby, and start an agent like so:

$ amiral.rb -k /path/to/privkey agent

You can then test querying the agent through a controller:

$ amiral.rb -k /path/to/privkey controller uptime
accepting acknowledgements for 2 seconds
got 1/1 positive acknowledgements
got 1/1 responses 09:06:15 up 5 days, 10:48, 10 users,  load average: 0.08, 0.06, 0.05

If you’re feeling adventurous you can now start the clojure controller; its configuration is relatively straightforward, but a bit more involved since it’s part of an IRC + HTTP bot framework:

{:transports {amiral.transport.HTTPTransport {:port 8080}
              amiral.transport.irc/create    {:host ""
                                              :channel "#mychan"}}
 :executors {amiral.executor.fleet/create    {:keytype :dss
                                              :keypath "/path/to/key"}}}

In that config we defined two ways of listening for incoming controller requests: IRC and HTTP, and we added an “executor” i.e: a way of doing something.

You can now query your hosts through HTTP:

$ curl -XPOST -H 'Content-Type: application/json' -d '{"args":["uptime"]}' http://localhost:8080/amiral/fleet
{"message":" 09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16",
 "since":"5 days, 11:23",
 "short":"09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16"}

Or on IRC:

09:42 < pyr> amiral: fleet uptime
09:42 < amiral> pyr: waiting 2 seconds for acks
09:43 < amiral> pyr: got 1/1 positive acknowledgement
09:43 < amiral> pyr: got 1 responses
09:43 < amiral> pyr: 09:42:57 up 5 days, 11:25, 10 users,  load average: 0.16, 0.20, 0.17

Next Steps

This was a fun experiment, but there are a few outstanding problems which will need to be addressed quickly:

  • Tests, tests, tests. This was a PoC project to start with, but I should have known better and written tests along the way.
  • The queue-based reply handling makes controller logic complex and timeout handling approximate; it should be switched to pub-sub.
  • The signing should be done based on known hosts’ public keys instead of the shared key used now.
  • The agent should expose more common actions: service interaction, puppet runs, etc.

          Another year of Clojure   

Clojure at

I’ve been involved with clojure almost exclusively for a year as Smallrivers’ lead architect, working on the product, and wanted to share my experience of clojure in the real world.

I had previous experience with clojure, having put it to work where ruby on rails wasn’t a natural fit, and although Smallrivers is a close neighbor of typesafe in Switzerland, that previous experience made the language prevail over scala.

Why clojure ?

While working on the backend architecture at a previous company I decided to evaluate three languages which met the needs I was faced with:

  • erlang
  • scala
  • clojure

I decided to tackle the same simple task in all three languages and see how each would fare and how I felt about them. The company’s language at that time was Ruby and JS, and coming from a C background, I wanted a language which provided simplicity, good data structure support and concurrency features, while allowing us to still code quickly.

While naturally drawn to Erlang, I quickly had to set it apart because the stack that was starting to emerge at the time had JVM based parts and would benefit greatly from a language targetting the JVM. I was a bit bummed because some tools in the erlang world were very exciting and the lightweight actors were interesting for a part of our stack.

Scala made a very strong first impression on me, but in practice I was taken aback by some aspects of it: the lack of coherency in style among the open source projects found on the net, which made it hard to see which best practices and guidelines would have to be taught to the team; some of the code I found was almost reminiscent of perl a few years back, in its potential to become unmaintainable some time later. The standard build tool - SBT - also made a very weak impression. It seemed to be a clear step back from maven, which, given that maven isn’t a first class citizen in the scala world, seemed worrying.

Clojure took the cake, in part because it clicked with the lisper in me, in part because the common idioms that emerged from the code I read bore a lot of similarity to the way we approached ruby. The dynamic typing promised succinct code, and the notation for vectors, maps and sets hugely improved the readability of lisp - look at how hashes work in emacs lisp if you want to know what I mean. I was very excited about dosync and a bit worried by the lack of lightweight erlang-style actors, even though I could see how agents could help in that regard. As I’ll point out later on, we ended up not using these features at all anyhow.

The task at hand

When I joined Smallrivers to work on the product, it became natural to choose clojure. The team was small and I felt comfortable with it. There was a huge amount of work which needed to be started quickly, so a “full-stack” language was necessary to avoid spreading across too many languages and technologies, and another investigation into how the other languages had evolved in the meantime was not possible. The main challenges to tackle were:

  • Being able to aggregate more content
  • Improve the quality of the processing done on content
  • Scaling the storage cluster accordingly
  • Automate the infrastructure

The “hiring” problem

One thing that always pops up in discussions about somewhat marginal languages is the hiring aspect, and the fear that you won’t be able to find people if you “lock” yourself into a language decision that strays from the usual suspects. My experience is that when you tackle big problems, ones that go beyond simple execution and require actual strong engineers, hiring will be a problem; there’s just no way around it. Choosing people that fit your development culture and see themselves fit to tackle big problems is a long process, and integrating them is also time consuming. In that picture, the chosen language isn’t a huge deciding factor.

I see marginal languages as a problem in the following organisations:

  • Companies tackling smaller problems, or problems already solved. These are right in choosing standard languages, if I built a team to build an e-commerce site I wouldn’t go to clojure.
  • Larger companies which want their employees to jump from project to project, which makes sense from a managerial standpoint.

What we built

The bulk of what was done revolves around these functional items:

  • A platform automation tool, built on top of pallet.
  • Clojure facades for the tools relied upon (elastic search, cassandra, redis, kafka).
  • An ORM-type layer on top of cassandra
  • Our backend pipelines

I won’t go in too much detail on our in-house code, but rather reflect on how things went over.

Coding style and programming “culture”

One of the advantages of lisp, is that it doesn’t have much syntax to go around, so our rules stay simple:

  • the standard 2 space indent
  • we try to stick to 80 columns, because i’m that old
  • we always use require except for: and pallet.thread-expr which are use’d
  • we avoid macros whenever possible
  • we use dynamically rebindable symbols

Of course we embraced non mutable state everywhere possible, which in our case is almost everywhere. Whenever we need to checkpoint state, it usually goes to our storage layer, not to in memory variables.

When compared to languages such as C, I was amazed at how little rules are needed to enforce a consistent code look across projects, with very little time needed to dive into a part written by someone else.

The tools

  1. Local environment

    We didn’t settle on a unique tool-suite at the office, when picking up clojure I made the move from vim to emacs because the integration is better and I fell in love with paredit. Spread amongst the rest of team, textmate, eclipse and intellij were used.

    For building projects, leiningen was an obvious choice. I think leiningen is a great poster child for the greatest in clojure. A small and intelligent facade on top of maven, hiding all the annoying part of maven while keeping the nice distribution part.

    For continuous integration, we wrote a small bridge between leiningen and zi lein-zi which outputs pom.xml for maven, which are then used to build the clojure projects. We still hope to find some time to write a leiningen plugin for jenkins.

  2. Asynchronous programming

    Since a good part of what the product does relies on aggregation, async programming is very important. In the pure clojure world, the only real choice for async programming is lamina and aleph. To be honest, aleph turned out to be quite the challenge, owing to a combination of the amount of outbound connections that our work requires and the fact that aleph seems to initially target servers more than clients.

    Fortunately Zach Tellman put a lot of work into the library throughout last year and recent releases are more reliable. One very nice side effect of using a lisp to work with evented code is how readable code becomes, by retaining a sync like look.

    For some parts we still would directly go to a smaller netty facade if we were to start over, but that’s a direct consequence of how much we learned along the way.

  3. Libraries not frameworks

    A common mantra in the clojure development community is that to ease integration the focus should be on libraries, not frameworks. This shows in many widespread projects such as compojure, pallet, and a host of common clojure tools. This proved very useful to us as clients of these libraries, allowing easy composition. I think pallet stands out most in that regard. Where most configuration management solutions offer a complete framework, pallet is just a library offering machine provisioning, configuration and command and control, which allowed us to integrate it with our app and build our abstractions on top of it.

    We tried to stick to that mantra in all of our work, building many small composable libraries. We made some errors at the beginning by underutilizing some of clojure’s features, such as protocols, but we now have good dynamics for writing these libraries: writing the core of them with as few dependencies as possible, describing the behavior through protocols, and then writing add-ons which bring in additional dependencies and implement the protocol.

  4. Macros and DSLs

    Another common mantra is to avoid overusing macros. It can’t be overstated how easy they make things though, our entity description library (which we should really prep up for public release, we’ve been talking about it for too long now) allows statements such as these (simplified):

    (defentity :contributors
      (column :identifier (primary-key))
      (column :type (required))
      (column :name)
      (column :screen_name (index))
      (column :description)
      (column :detail (type :compound))
      (column :user_url)
      (column :avatar_url)
      (column :statuses_count (type :number))
      (has-many :articles)
      (has-many :editions (referenced false) (ttl 172800))
      (has-many :posts (key (timestamp :published_at)) (referenced false)))

    The power of DSLs in clojure cannot be overstated: with a few macros you can easily build full languages, allowing easy extension of the functionality. Case in point: extracting text from articles. Like most people we rely on a generic readability-type library, but we also need to handle some sites that require special treatment. By using a small DSL you can easily push rules that look like (simplified):

    (defsiterule ""
       (-> dom
           (pull "#stupid-article-id")))

    The great part is that you limit the knowledge to be transfered over to people writing the rules, you avoid intrusive changes to the core of your app and these can safely be pulled from an external location.

    At the end of the day, it seems to me as though the part of the clojure community that came from CL had awful memories of macros making code unreadable, but when sticking to macros with a common look and feel, i.e. with-<resource> and def<resource> type macros, there are huge succinctness takeaways without hindering readability or maintainability of the code.

  5. Testing

    Every respectable codebase is going to need at least a few tests. I’m of the pragmatist church, and straight out do not believe in TDD, nor in crazy coverage ratios. Of course we still have more than 95% unit test coverage, and the decoupled approach preached by clojure’s original developer, Rich Hickey, allows for very isolated testing. For cases that require mocking, midje provides a nice framework, and using it has proved very fruitful throughout our code.

  6. Concurrency, Immutable State and Data Handling

    Funnily, we ended up almost never using any concurrency features: not a single dosync made it into our codebase, only a few atoms and a single agent (used to avoid recreating a Socket object for each datagram sent). We also banned future usage to more closely control our thread pools. Our usage of atoms is almost exclusively bound to things that are written once / read many; in some cases we’d be better off with rebindable dynamic symbols.

    We rely on immutable state heavily though, and by heavily I actually mean exclusively. This never was a problem across the many lines of code we wrote, and helped us keep a sane and decoupled code base.

    With facades allowing us to represent database fields, queue entries, and almost anything else as standard clojure data structures, and with powerful functions to work on them, complex handling of large amounts of data is very easily expressed. For this we fell in love with several tools which made things even easier:

    • the threading operators -> and ->>
    • the pallet thread-expr library which brings branching in threaded operations: for->, when->, and so on
    • assoc-in, update-in, seq-utils/index-by and all these functions which allow easy transformation of data structs while retaining a procedural look

    I cannot stress how helpful this has been for us in doing the important part of our code right and in a simple manner. This is clearly the best aspect of clojure as far as I’m concerned.
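    A small illustration of the style these functions enable, using only clojure.core (the data is made up):

```clojure
(def event
  {:user {:id 42 :visits 3}
   :tags ["news"]})

;; -> threads the map through each transformation, reading like a
;; procedural recipe while every step returns a new immutable value;
;; assoc-in and update-in reach directly into the nested structure.
(def processed
  (-> event
      (assoc-in [:user :country] "is")
      (update-in [:user :visits] inc)
      (update-in [:tags] conj "clojure")))
```

    The original `event` is untouched afterwards, which is exactly the immutable-state property the article relies on.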

    Moreover, building on top of Java and with the current focus on “Big Data” everywhere, the interaction with large stores and tools to help building batch jobs are simply amazing, especially cascalog.

  7. The case of Clojurescript

    While very exciting, clojurescript was not something we had a use for, given the size of the existing JS codebase and the willingness of the frontend developers to stick with what they knew.

    The simple existence of the project amazes me, especially with the promise of more runtimes; there are various implementations on top of lua, python and gambit (a scheme that compiles to C). With projects like cascalog, pallet, lein, compojure, noir and clojurescript, the ecosystem addresses all parts of almost any stack that you will be tempted to build, and we didn’t encounter cases of feeling cornered by the use of clojure - admittedly, most of the time, a Java library came to the rescue.

  8. The community

    The community is very active and has not reached critical mass yet, which keeps its mailing-list and irc room usable. There are many influential public figures, some who bring insight, some who bring beautiful code. Most are very open and available for discussion, which shaped our approach to the language and our way of coding along the way.

Closing words

It’s been an exciting year and we’re now a full-fledged 80% clojure shop. I’m very happy with the result, more so with the journey. I’m sure we could have achieved this with other languages as well. As transpires throughout the article, though, the whole team feels that should we start over, we would do it in clojure again.

It helped us go fast, adapt fast and didn’t hinder us in any way. The language seems to have a bright future ahead of it, which is reassuring. I would encourage people coming from python and ruby to consider it as a transition language or as their JVM-targeting language, since many habits are still valid in clojure and since it helps slightly change the way we look at problems, which can then be reapplied in more “traditional” languages.

  1. Rich Hickey’s talk “Simple Made Easy” and his coining of the term “complecting” illustrate that


          The death of the configuration file   

Taking on a new platform design recently I thought it was interesting to see how things evolved in the past years and how we design and think about platform architecture.

So what do we do?

As system developers, system administrators and system engineers, what do we do?

  • We develop software
  • We design architectures
  • We configure systems

But that isn’t the purpose of our jobs; for most of us, our purpose is to generate business value. From a non technical perspective, we generate business value by creating a system which performs one or many functions and provides insight into its operation.

And we do this by developing, configuring, logging and maintaining software across many machines.

When I started doing this - back when knowing how to write a sendmail configuration file could get you a paycheck - it all came down to setting up a few machines: a database server, a web server, a mail server, each logging locally and providing its own way of reporting metrics.

When designing custom software, you would provide reports over a local AF_UNIX socket, and configure your software by writing elegant parsers with yacc (or its GNU equivalent, bison).

When I joined the OpenBSD team, I did a lot of work on configuration files. Ask any member of the team: configuration files are a big concern, and careful attention is put into clean, human readable and writable syntax. Additionally, all configuration files are expected to look and feel the same, for consistency.

It seems as though the current state of large applications now demands another way to interact with operating systems, and some tools are now leading the way.

So what has changed?

While our mission is still the same from a non technical perspective, the technical landscape has evolved and gone through several phases.

  1. The first era of repeatable architecture

    We first realized that as soon as several machines performed the same task the need for repeatable, coherent environments became essential. Typical environments used a combination of cfengine, NFS and mostly perl scripts to achieve these goals.

    Insight and reporting were then provided either by horrible proprietary kludges that I shall not name here, or by emergent tools such as netsaint (now nagios), mrtg and the like.

  2. The XML mistake

    Around that time, we started hearing more and more about XML, then touted as the solution to almost every problem. The rationale was that XML was - somewhat - easy to parse, and would allow developers to develop configuration interfaces separately from the core functionality.

    While this was a noble goal, it was mostly a huge failure. Above all, it was a victory of developers over people using their software, since they didn’t bother writing syntax parsers and let users cope with the complicated syntax.

    Another example was the difference between Linux’s iptables and OpenBSD’s pf. While the former was supposed to be the backend for a firewall handling tool that never saw the light of day, the latter provided a clean syntax.

  3. Infrastructure as code

    Fast forward a couple of years: most users of cfengine were fed up with its limitations, and architectures, while following the same logic as before, became bigger and bigger. The need for repeatable and sane environments was as important as it ever was.

    At that point in time, PXE installations were added to the mix of big infrastructures and many people started looking at puppet as a viable alternative to cfengine.

    puppet provided a cleaner environment, and allowed easier formalization of technology, platform and configuration. Philosophically though, puppet stays very close to cfengine by providing a way to configure large numbers of systems through a central repository.

    At that point, large architectures also needed command and control interfaces. As noted before, most of these were implemented as perl or shell scripts in SSH loops.

    On the monitoring and graphing front, not much was happening, nagios and cacti were almost ubiquitous, while some tools such as ganglia and collectd were making a bit of progress.

Where are we now?

At some point recently, our applications started doing more. While for a long time the canonical dynamic web application was a busy forum, more complex sites started appearing everywhere. We were not building and operating sites anymore but applications. And while, with the help of haproxy, varnish and the like, the frontend was mostly a settled affair, complex backends demanded more work.

At the same time the advent of social enabled applications demanded much more insight into the habits of users in applications and thorough analytics.

New tools emerged to help us along the way:

  • In memory key value caches such as memcached and redis
  • Fast elastic key value stores such as cassandra
  • Distributed computing frameworks such as hadoop
  • And of course on demand virtualized instances, aka: The Cloud
  1. Some daemons only provide small functionality

    The main difference in the new backend stack is that the individual pieces of software are not useful on their own anymore.

    Software such as zookeeper, kafka and rabbitmq serve no other purpose than to provide supporting services in applications, and their functionality is almost only available as libraries to be used in distributed application code.

  2. Infrastructure as code is not infrastructure in code !

    What we missed along the way it seems is that even though our applications now span multiple machines and daemons provide a subset of functionality, most tools still reason with the machine as the top level abstraction.

    puppet for instance is meant to configure nodes, not clusters, and makes dependencies very hard to manage. A perfect example is the complications involved in setting up configurations dependent on other machines.

    Monitoring and graphing, except for ganglia, have long suffered from the same problem.

The new tools we need

We need to kill local configurations, plain and simple. With a simple enough library to interact with distant nodes, starting and stopping services, configuration can happen in a single place; instead of relying on a repository based configuration manager, configuration should happen from inside applications and not be an external process.
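    What configuration driven from inside the application could look like, as a hypothetical sketch: every function here is a stub standing in for such a library (actions are recorded in a vector so the example is self-contained), not any real tool's API.

```clojure
;; Hypothetical stubs: a real library would talk to the remote nodes.
(def actions (atom []))

(defn ensure-package! [node pkg]
  (swap! actions conj [:package node pkg]))

(defn push-config! [node svc settings]
  (swap! actions conj [:config node svc settings]))

(defn ensure-service! [node svc state]
  (swap! actions conj [:service node svc state]))

;; The application declares the desired state and the library converges
;; every node: no local configuration file to template and distribute.
(defn converge-frontends! [nodes upstreams]
  (doseq [node nodes]
    (ensure-package! node "haproxy")
    (push-config! node :haproxy {:upstreams upstreams})
    (ensure-service! node :haproxy :reloaded)))

(converge-frontends! ["lb1" "lb2"] ["app1:8080" "app2:8080"])
```

    The point of the sketch is the inversion: configuration becomes a function call inside the application rather than a file pushed by an external process.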

If this happens in a library, command & control must also be added to the mix, with centralized and tagged logging, reporting and metrics.

This is going to take some time, because it is a huge shift in the way we write software and design applications. Today, configuration management is a very complex stack of workarounds for non standardized interactions with local package management, service control and software configuration.

Today, dynamically configuring bind, haproxy and nginx, installing a package on Debian or OpenBSD, restarting a service: all these very simple tasks which we automate and operate from a central repository force us to build complex abstractions. When using puppet, chef or pallet, we write complex templates because software was meant to be configured by humans.

The same goes for checking the output of running arbitrary scripts on machines.

  1. Where we’ll be tomorrow

    With the ease PaaS solutions bring to developers, and offers such as the ones from VMware and open initiatives such as OpenStack, it seems as though virtualized environments will very soon be found everywhere, even in private companies, which will deploy such environments on their own hardware.

    I would not bet on it happening, but a terse input and output format for system tools and daemons would go a long way towards ensuring easy and fast interaction with configuration management and command and control software.

    While it was a mistake to try to push XML as a terse format replacing configuration files to interact with single machines, a terse format is needed to interact with many machines providing the same service, or to run many tasks in parallel - even though, admittedly, tools such as capistrano or mcollective do a good job at running things and providing sensible output.

  2. The future is now !

    Some projects are leading the way in this new orientation; 2011, as I’ve seen it called, will be the year of the time series boom. For package management and logging, Jordan Sissel released such great tools as logstash and fpm. For easy graphing and deployment, etsy released great tools, amongst which statsd.

    As for bridging the gap between provisioning, configuration management, command and control and deploys, I think two tools, both based on jclouds1, are going in the right direction:

    • Whirr2: which lets you start a cluster through code, providing recipes for standard deploys (zookeeper, hadoop)

    • pallet3: which lets you describe your infrastructure as code and interact with it in your own code. pallet’s phase approach to cluster configuration provides a smooth dependency framework which allows easy description of dependencies between configuration across different clusters of machines.

  3. Who’s getting left out?

    One area where things seem to move much slower is network device configuration. For people running open source based load-balancers and firewalls, things are looking a bit nicer, but the switch landscape is a mess. As tools mostly geared towards public cloud services make their way into private corporate environments, hopefully they’ll also get some of the programmable

          Artificial Intelligence: Finance Pushes For Open Source Software Agenda   
It makes sense for large technology companies like Google and Microsoft to open source AI and machine learning solutions because they have overlapping vertical interests in providing vast cloud services. These come into play when a certain machine learning library becomes popular and users deploy it on the cloud and so forth. It is less clear why financial services companies, which play a much more directly correlated zero sum game, would open up code that they paid the engineering team to create.
          SUSE Manager 3.1 optimises enterprise DevOps and IT operations   

SUSE has released SUSE Manager 3.1 as its new DevOps and container management solution for enterprises. The new offering is designed to boost DevOps ...

The post SUSE Manager 3.1 optimises enterprise DevOps and IT operations appeared first on Open Source For You.

          Five ways to build your reputation as a niche developer   
Building reputation as a developer

In the competitive world, the new generation of tech professionals is using various ways to market themselves. Building your niche in a particular technology ...

The post Five ways to build your reputation as a niche developer appeared first on Open Source For You.

          Baidu releases open source deep learning benchmark tool to measure inference   

Baidu Research, a division of Chinese Internet giant Baidu, has released its open source deep learning benchmark tool. Called DeepBench, the new solution comes ...

The post Baidu releases open source deep learning benchmark tool to measure inference appeared first on Open Source For You.

          Red Hat Unveils Industry’s First Production-Ready Open Source Hyperconverged Infrastructure   
Red Hat debuts integrated, software-defined compute and storage platform designed for remote sites and edge deployments Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today introduced Red Hat Hyperconverged Infrastructure, the industry’s first production-ready fully open source hyperconverged infrastructure (HCI) solution. By combining innovative virtualization and storage technologies with a […]
          Shark Tank Competitor 4 – Workjam   
Workjam President and CEO Steven Kramer envisions a new retail workforce of mobile, flexible associates who rely on open source systems to manage schedules and tasks in a crowd sourced, democratized paradigm. The cloud-based web and mobile solution allows self-management and provides open source calendars. It eliminates manual processes and adds better communication up and down the hierarchy … Continue reading Shark Tank Competitor 4 – Workjam
          Php developer   
Chetu India Private Limited - Delhi - 2+ Years of Experience as PHP Developer. Experience in PHP, Magento & MySQL, various open source frameworks. Strong knowledge of JavaScript... documentation. Candidate should have basic knowledge of an OOPs concept. 2+ Years of Experience as PHP Developer. Experience in PHP, Magento & MySQL...
          Site Reliability Engineer (Software) - Indeed - Andhra Pradesh   
Serve as subject matter expert for multiple proprietary and open source technologies. As the world’s number 1 job site, our mission is to help people get jobs....
From Indeed - Fri, 23 Jun 2017 07:16:14 GMT - View all Andhra Pradesh jobs
          2017 OSEHRA Leadership Award Recipients Announced   

The Open Source Electronic Health Record Alliance (OSEHRA) is pleased to announce this year’s OSEHRA Leadership Award winners.

(PRWeb June 30, 2017)

Read the full story at

          (USA-MI-Kalamazoo) Lead Programmer   
KCMHSAS is seeking a full-time Lead Programmer to develop solutions to allow for effective and efficient service delivery. Development will be done in the Microsoft technology stack (SQL Server, C#, .NET, SharePoint). Deep knowledge of SQL Server Analysis Services, Integration Services, and Reporting Services is necessary. Open source systems (Linux, BSD), Python, and Git knowledge is a plus. Bachelor’s Degree in Information Technology or Computer Science with 3 years of relevant experience required. Understanding of the public mental health system preferred.

We offer competitive compensation and fringe benefits, including medical, vision and dental insurance; disability and workers compensation insurance; paid holidays, generous Paid Time Off plan, continuing education, retirement plan and Deferred Compensation Plan.

Individuals of diverse racial, ethnic, and cultural backgrounds along with bilingual candidates are encouraged to apply. KCMHSAS is an equal opportunity employer that encourages diversity and inclusion among its workforce. We strive to empower people to succeed.

Physical Requirements / Working Conditions: Physical Efforts – Job demands include prolonged sitting and standing as appropriate. May occasionally require light lifting up to 25 pounds, stooping, kneeling, crouching, or bending as appropriate. Requires coordination of hands and/or eye/hand/foot. Working Conditions – Office environment with noise from computers, copy machine, and telephones. Use of video display terminal (VDT) for periods in excess of 30 minutes at a time. Possible eyestrain from extended periods of viewing VDT. May be exposed to bloodborne pathogens, infectious diseases, and parasites. Travel throughout the Kalamazoo area is required.
          2010 Books In Review   

Well, my goal this year was to cut back my reading and I somewhat succeeded. I only read 44 books in 2010, but I saved a lot of time by not writing reviews for most of them. Here’s the list for the year, with links to the ones I have written a review for at this time.

  1. Open Sources 2.0 Edited By Chris DiBona, Danese Cooper, and Mark Stone
  2. The Mythical Man-Month By Frederick Brooks
  3. The Innovator’s Dilemma By Clayton Christensen
  4. Predictably Irrational By Dan Ariely
  5. The Cluetrain Manifesto By Rick Levine, Christopher Locke, Doc Searls, and David Weinberger
  6. The Greatest Benefit to Mankind: A Medical History of Humanity By Roy Porter
  7. Ender’s Game By Orson Scott Card
  8. Guns, Germs, and Steel: The Fates of Human Societies By Jared Diamond
  9. The Future of Ideas: The Fate of the Commons in a Connected World by Lawrence Lessig
  10. The Stand By Stephen King
  11. Biobazaar: The Open Source Revolution and Biotechnology By Janet Hope
  12. The Drawing of the Three: The Dark Tower #2 By Stephen King
  13. The New Complete Joy of Homebrewing By Charlie Papazian
  14. The Complete Stories: Volume 1 By Isaac Asimov
  15. Neuromancer By William Gibson
  16. Count Zero By William Gibson
  17. Mona Lisa Overdrive By William Gibson
  18. In Defense of Food: An Eater’s Manifesto By Michael Pollan
  19. The Waste Lands: The Dark Tower #3 By Stephen King
  20. The Dilbert Principle By Scott Adams
  21. Wizard and Glass By Stephen King
  22. Wolves of the Calla By Stephen King
  23. The Song of Susannah By Stephen King
  24. ‘Salem’s Lot By Stephen King
  25. The Dark Tower By Stephen King
  26. The Red Thread of Passion By David Guy
  27. The Cathedral & the Bazaar By Eric Raymond
  28. NurtureShock By Po Bronson and Ashley Merryman
  29. Capitalism, Socialism, and Democracy By Joseph Schumpeter
  30. Sex at Dawn By Christopher Ryan and Cacilda Jetha
  31. Innovation Happens Elsewhere (Did not finish)
  32. Effective C# By Bill Wagner
  33. More Effective C# By Bill Wagner
  34. Programming in Scala By Martin Odersky
  35. Foundation By Isaac Asimov
  36. Foundation and Empire By Isaac Asimov
  37. Second Foundation By Isaac Asimov
  38. Dr. Kinsey and the Institute for Sex Research By Wardell Pomeroy
  39. Pain: The Fifth Vital Sign By Marni Jackson
  40. Programming Perl, 3rd Edition By Larry Wall, Tom Christiansen, & Jon Orwant
  41. Joel on Software By Joel Spolsky
  42. Listening To Prozac By Peter D. Kramer
  43. Songs of Distant Earth By Arthur C. Clarke
  44. Speaker for the Dead By Orson Scott Card

          New Job   

I mentioned in my last post that I was learning Perl in the hopes of landing a job. Well, that has now paid off as I will be starting at Summersault next week. I’m pretty excited to get out of working with Microsoft tools. I was worried about getting pigeonholed into that if I took another job with it. While C# is a great language, my moral objections to Microsoft’s business practices far outweigh my love of C#. Now I get to work with a variation of the LAMP stack (FreeBSD, Apache, PostgreSQL, and Perl) as part of a small team. And other people can actually see my work this time. That was sometimes frustrating when writing internal web apps.

This change may affect my open source work with Trac. Summersault does not use it internally (RT seems to be the standard with Perl). Up until now LSR’s use of it was a major motivator for me to get involved. We will see if I am able to sustain interest when I am not using it on a daily basis. If not, I will put out a call for someone to adopt the batch modify plugin. The whiteboard plugin will probably just die. I can’t see anybody else wanting to put the necessary work into it.

          BPM, Books, and ... Perl?   

Due to my lack of posts here I have now scheduled writing time every night for at least a half hour. So far this has proven fruitful. I’ve been hard at work writing a series of posts to finish up My Lombardi Experience. I want to focus on why our BPM experience failed with the hope of preventing other teams from failing in the same way. Right now I have a lot of material that needs massaged into something that I am comfortable posting.

Book reviews may resume as well, but only for those I feel that deserve it. Previously I had been writing something for every book I read, but that became a pain in the ass. The goal was to read less this year, but I still am going to end up in the low 40s. I read one a week last year, so I guess it still counts as less though.

On the coding front I’ve mostly been busy learning Scala and … Perl. That one was certainly a surprise, but a job opportunity came along that required me to learn it. Now hopefully I actually get the job. Unemployment is getting boring. I may write more about Scala and Perl later, but they are so alien to each other that I haven’t even attempted to compare them yet. I can’t imagine too many programmers go from C#, to Scala, to Perl.

Taking the time to learn two new languages has caused my open source work to recede into the background for now. The Trac whiteboard plugin may languish in prototype phase for quite some time now. An Indian company did express interest in hiring me to extend it for their needs, but that never materialized. The batch modify plugin may get some love early next year. I am currently looking at making it aware of custom workflows, but doing so will require a major rewrite. I already started it once, but left the codebase a mess, so I’ll probably start from a new branch.

          Software Developer - (Holyoke)   
Software Developer

We are looking for a highly experienced Software Developer and Architect to assist in the design of a next generation web application offering. The ideal candidate has prior experience in designing and deploying web applications from the ground up, as well as understanding of emerging technologies and trends. An important part of this role is examination of current and other open source technologies and working with team members to produce a common blueprint for our next gen product.
          Application Development Team - (Holyoke)   
Application Development Team

GRT Corporation is looking for three Java/Oracle specialists to deliver a web based application for its client located in Holyoke, MA.

  • Highly experienced Software Developer and Architect - to assist in the design of a next generation web application offering.
  • Two Web Application Developers - to develop and support all external/internal web related software and applications.

W2 tax term is required for all positions.

Responsibilities: Architect - architect, design, and code using Spring and other open source technologies while investigating existing and new technologies to build new features and integration points. Developers - code and support external/internal web related software and applications while assisting with the development of test conditions and scenarios. Collaborate with other team members to implement application features, including user interface & business functionality.

Qualifications:

  • B.S. degree in Computer Science, or equivalent
  • Minimum of 5+ years software development experience
  • Experience with UNIX operating system, services, and commands
  • Experience with J2EE, Hibernate, Spring and Struts
  • Experience with modern front-end JavaScript libraries (jQuery)
  • Experience with REST/JSON APIs
  • Experience on application servers such as Apache Tomcat, JBOSS EAP 6.x
  • Hands on experience with JAX-RS, JAXB, JMS, Spring 4
  • Strong experience in JUnit, Mockito, Spring-Test and automated testing in general is a MUST
  • Experience creating/consuming Web services
  • Experience with testing frameworks
  • Strong experience working with databases - PL/SQL
  • Demonstrates integrity and authenticity

Additional Qualifications:

Architect: experience in agile methodology; 5-10+ years’ experience writing robust web applications with Spring Framework (Spring Boot, Spring Security, Spring MVC, etc.) using Java; familiarity with GIT, CVS source code management tools.

Developer 1: Java frameworks, especially micro service architecture; Java framework and messaging architecture.

Developer 2: experience with SalesForce API; strong experience in Enterprise Application Integration patterns (EAI).

If you are interested, please apply to the positions providing the following: the position you are applying for, your current/desired compensation, a daytime phone number, and your authorization status.

thomas.simpson@grtcorp.com

Regards,
Thomas Simpson
HR Specialist
GRT Corporation
Stamford, CT 06901
Web: , J2EE, JSON, Spring 4, Hibernate, Struts, Jaxb, Jaxr, PL/SQL Source:
          Senior Site Reliability Engineer - (Watertown)   
ID: 2017-1880
Job Location(s): US-MA-Watertown
Position Type: Permanent - Full Time

More information about this job:

Overview: This role is based within our Global Technical Operations team. Mimecast Engineers are technical experts who love being in the centre of all the action and play a critical role in making sure our technology stack is fit for purpose, performing optimally with zero down time. In this high priority role you will tackle a range of complex software and system issues, including monitoring of large farms of servers in multiple geographic locations, responding to and safeguarding the availability and reliability of our most popular services.

Responsibilities: Contribution and active involvement with every aspect of the production environment, including:

  • Dealing with design issues.
  • Running large server farms in multiple geographic locations around the world.
  • Performance analysis.
  • Capacity planning.
  • Assessing application behavior.
  • Linux engineering and systems administration.
  • Architecting and writing moderately-sized tools.

You will focus on solving difficult problems with scalable, elegant and maintainable solutions.

Qualifications: Essential skills and experience:

  • In depth expertise in Linux internals and system administration including configuration and troubleshooting.
  • Hands on experience with performance tuning of Linux OS (CentOS), identifying bottlenecks such as disk I/O, memory, CPU and network issues.
  • Extensive experience with at least one scripting language apart from BASH (Ruby, Perl, Python).
  • Strong understanding of TCP/IP networking, including familiarity with concepts such as the OSI stack.
  • Ability to analyze network behaviour, performance and application issues using standard tools.
  • Hands on experience automating the provisioning of servers at a large scale (using tools such as Kickstart, Foreman etc).
  • Hands on experience in configuration management of server farms (using tools such as mcollective, Puppet, Chef, Ansible etc).
  • Hands on experience with open source monitoring and graphing solutions such as Nagios, Zabbix, Sensu, Graphite etc.
  • Strong understanding of common Internet protocols and applications such as SMTP, DNS, HTTP, SSH, SNMP etc.
  • Experience running farms of servers (at least 200+ physical servers) and associated networking infrastructure in a production environment.
  • Hands on experience working with server hardware such as HP ProLiant, Dell PowerEdge or equivalent.
  • Be comfortable with working on call rotas and out of hours as and when required to ensure uptime of services.

Desirable skills:

  • Working with PostgreSQL databases.
  • Administering Java based applications.
  • Knowledge working with MVC frameworks such as Ruby on Rails.
  • Experience with container technology.

Rewards: We offer a highly competitive rewards and benefits package including pension, private healthcare, life cover and a gym subsidization.
          Software Development Engineer in Test - Folio - (Ipswich)   
Skills/Requirements:

  • 5+ yrs Java & Object Oriented Design/Programming
  • Implementation of 1 or more production RESTful interfaces in a microservices model
  • 2+ yrs product implementation experience with databases, both SQL and NoSQL - PostgreSQL specifically is a plus
  • 2+ yrs product implementation experience in a cloud computing environment - AWS specifically is a plus
  • 3+ yrs experience using Agile and/or SAFe

Preferred Qualifications:

  • CI/CD using (eg) Jenkins, Maven, Gradle
  • SCM - Git/GitHub
  • Test Driven Development (TDD) and Automated Unit Testing
  • Developing automated integration and acceptance tests
  • Automating UI testing (eg Selenium, Sauce Labs)
  • Developing performance and load tests at high scale (eg JMeter)
  • General HTTP knowledge including familiarity with cURL or similar tools
  • Linux - general knowledge, shell scripting - RedHat/Amazon Linux specifically is a plus
  • Virtualization - Docker, Vagrant, etc.
  • Open Source Software - general knowledge of the SW dev model, experience contributing to it
  • RAML, JSON, XML
  • JavaScript and related tools/frameworks - both client-side and server-side - React, Node.js, webpack, npm/yarn, etc.
  • Security related experience - SSO, OAuth, SAML, LDAP, etc.
  • Logging/Monitoring/Alerting/Analytics - SumoLogic, Datadog, collectd, SNMP, JMX, etc.

Why the North Shore of Boston and EBSCO are great places to live and work! Here at EBSCO we will provide relocation assistance to the best and brightest people. We are 45 minutes outside of Boston, just minutes from the beach in Ipswich, MA. Ipswich is part of the North Shore and contains a wide variety of locally owned shops, restaurants, and farms.
          Andy Wingo: encyclopedia snabb and the case of the foreign drivers   

Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

Click to download video

You can check that video on the youtube if the link above doesn't work; slides here.

Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

  1. Bundle a snapshot of the DPDK into Snabb itself.

  2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                     Snabb   DPDK
Code lines                           61K     583K
Contributors (all-time)              60      370
Contributors (since Jan 2016)        32      240
Non-merge commits (since Jan 2016)   1.4K    3.2K

These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavy-weight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

  • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that's running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

  • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we make a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case, it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI.

  • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

  • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise to reintroduce this particular complexity into Snabb.

  • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

  • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

  • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

  • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.

So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.

Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.

          TNR Global Launches Search Application for Museum Collections   
We use open source search technology that works with most museum software systems and databases including the popular museum software product PastPerfect.
          On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i, by Jeremiah Blocki and Samson Zhou   
Argon2i is a data-independent memory hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that (1) an Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible. (2) The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ [Alwen and Blocki, EURO S&P 2017]. (3) An Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, the analysis of [Alwen et al., EUROCRYPT 2017] only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^2/e^2\right)\right)$-depth robust. (4) The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left(n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left(n^{1.66}\right)$ [Alwen et al., EUROCRYPT 2017] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. We also show that Argon2i has high {\em fractional} depth-robustness, which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
          eCube Systems Interviews VSI’s Brett Cameron on OpenVMS, Open Source and Developer Tools   

Brett Cameron talks about VSI's on-going evolution of OpenVMS back into the industry's most versatile and dominant OS platform

(PRWeb February 13, 2017)

Read the full story at

          New Kernels and Linux Foundation Efforts   
  • Four new stable kernels
  • Linux Foundation Launches Open Security Controller Project

    The Linux Foundation launched a new open source project focused on security for orchestration of multi-cloud environments.

    The Open Security Controller Project software will automate the deployment of virtualized network security functions — such as firewalls, intrusion prevention systems, and application data controllers — to protect east-west traffic inside the data center.

  • Open Security Controller: Security service orchestration for multi-cloud environments

    The Linux Foundation launched the Open Security Controller project, an open source project focused on centralizing security services orchestration for multi-cloud environments.

  • The Linux Foundation explains the importance of open source in autonomous, connected cars

    Open source computing has always been a major boon to the world of developers, and technology as a whole. Take Google's pioneering Android OS for example, based on the open source code, which can be safely credited with impacting the world of everyday technology in an unprecedented manner when it was introduced. It is, hence, no surprise when a large part of the automobile industry is looking at open source platforms to build on advanced automobile dynamics.

           Linux Rolls Out to Most Toyota and Lexus Vehicles in North America   

In his keynote presentation at Automotive Linux Summit, AGL director Dan Cauchy proudly announced that the 2018 Toyota Camry will offer an in-vehicle infotainment system based on AGL’s UCB.
The Linux Foundation

At the recent Automotive Linux Summit, held May 31 to June 2 in Tokyo, The Linux Foundation’s Automotive Grade Linux (AGL) project had one of its biggest announcements in its short history: the first automobile with AGL’s open source, Linux-based Unified Code Base (UCB) infotainment stack will hit the streets in a few months.

In his ALS keynote presentation, AGL director Dan Cauchy showed obvious pride as he announced that the 2018 Toyota Camry will offer an in-vehicle infotainment (IVI) system based on AGL’s UCB when it debuts to U.S. customers in late summer. Following the debut, AGL will also roll out to most Toyota and Lexus vehicles in North America.

Read more


          [Inspiration] 3 techniques for managing your teams like at École 42   
In this "open source" learning structure co-founded by Xavier Niel, they seek to answer the question: "How do you bring out the potential of coding geniuses?" Here are three methods to apply in your own SME.
          Intriguing Interview With Matt Mullenweg By Japanese Magazine   
Intriguing interview conducted by a Japanese-focused developer resource site. As your experience straddles both, where do you think open source excels? And where is it weak? The open source model is probably best in the world at bringing together hundreds of people, from casual passersby to (more...)
Cyberduck is an open source FTP (File Transfer Protocol), SFTP (SSH Secure File Transfer), WebDAV (Web-based Distributed Authoring and Versioning), Amazon S3, Google Cloud Storage, Windows Azure, Rackspace Cloud Files or Google Docs client …
LibreOffice is the power-packed free, libre and open source personal productivity suite for Windows, Macintosh and GNU/Linux, that gives you six feature-rich applications for all your document production and data processing …
          DevOps Engineer   
MD-New Windsor, Sr. DevOps Engineer Windsor, MD (Locals preferred) An in-person interview is a must. “US Citizen or those who are authorized to work for any employer without any sponsorship.“ Immediate opening for a Sr. DevOps Engineer for our client in the Windsor, MD area. It is a 6-month contract position. This individual must be able to master a multitude of open source platform and cloud technologies, incl
          WordPress Developers – How Do You Make A Living [Poll + Discussion]?   

The Question

I'd like to pose this question to all WordPress developers – plugin, theme, as well as core ones:

How do you make your living?

And, for clarification, by this I mean: "what are your primary sources of income?"

Open Source

Open source is a beautiful concept but it often comes with a price tag or, rather, the inverse price tag: most of the time you are not being paid for your time (of course, there are exceptions, such as companies hiring dedicated open source developers and keeping them on their direct payroll).

Everyone has to make a living, however, and everyone has their ways.

Developers can benefit from such income sources as:

          Nokia X2 unveiled: the next generation of the X line   
Today Microsoft Mobile OY (formerly Nokia Devices & Services) unveiled the second generation of the Nokia X line, the Nokia X2, along with an improved platform interface (based on the Android Open Source Project). At an affordable price (estimated at 99 euros excluding taxes and fees), the new model offers a number of improvements over the first-generation devices in hardware, design, […]
          Comment on Support Open Source by Potter   
I agree. For years, I have been enriched by ROS!
          A Wild & Disobedient Life   

Henry David Thoreau, on his 200th birthday, is an American immortal who got there the hard way – against the grain of his town and his times.  By now he’s the heroic non-conformist who modeled ...

The post A Wild & Disobedient Life appeared first on Open Source with Christopher Lydon.

          Tesla Home Battery Open Source: 5 reasons to love this project   

I was a super fan of Steve Jobs and how he disrupted the music and smart devices industries. Unfortunately he left us on October 5th, 2011, and since then I haven’t been excited about any keynote or big announcement to date. My team and I would like to share with our community this keynote and […]

The post Tesla Home Battery Open Source: 5 reasons to love this project appeared first on OSVehicle #OpenSource Vehicle Open Motors.

          Print your own aquaponics garden with this open source urban farming system   

Aquapioneers has developed what it calls the world's first open source aquaponics kit in a bid to reconnect urban dwellers with the production of their food. Combining open source, digital fabrication, DIY, and urban farming, this startup's project aims to put the tools for zero-mile food into the hands of everyone.

          Comment on Reliably compromising Ubuntu desktops by attacking the crash reporter by Serious Ubuntu Linux desktop bugs found and fixed | on open source   
[…] Mint, you have a bug to patch. Donncha O’Cearbhaill, an Irish security researcher, found a remote execution bug in Ubuntu. This security hole, which first appeared in Ubuntu 12.10, makes it possible for malicious code to […]
          SUSE Manager 3.1 – a step toward DevOps environments and containers   

SUSE has unveiled SUSE Manager 3.1, the latest version of its fully open source software for managing IT infrastructure. It improves productivity in DevOps environments and makes it easier for companies to deploy and administer containers. SUSE Manager is a key element of SUSE's family of software-defined infrastructure solutions, helping adapt IT environments to the demands of the digital economy and respond quickly to customer needs.

          Apple updates OS X’s NTP server to address recently disclosed NTP vulnerabilities   
(LiveHacking.Com) – Apple has released a patch for OS X Mountain Lion v10.8.5, OS X Mavericks v10.9.5, and OS X Yosemite v10.10.1 to update the included NTP server to fix the recently disclosed vulnerabilities. The standard, open source Network Time Protocol (NTP) daemon (ntpd) contains multiple vulnerabilities which were publicly disclosed a few days ago. The vulnerabilities not only […]
          Dries Buytaert: Acquia's first decade: the founding story   

This week marked Acquia's 10th anniversary. In 2007, Jay Batson and I set out to build a software company based on open source and Drupal that we would come to call Acquia. In honor of our tenth anniversary this week, I wanted to share some of the milestones and lessons that have helped shape Acquia into the company it is today. I hope that my record of Acquia's history not only pays homage to our incredible colleagues, customers and partners that have made this journey worthwhile, but that it offers honest insight into the challenges and rewards of building a company from the ground up.

A Red Hat for Drupal

In 2007, I was attending the University of Ghent working on my PhD dissertation. At the same time, Drupal was gaining momentum; I will never forget when MTV called me seeking support for their new Drupal site. I remember being amazed that a brand like MTV, an institution I had grown up with, had selected Drupal for their website. I was determined to make Drupal successful and helped MTV free of charge.

It became clear that for Drupal to grow, it needed a company focused on helping large organizations like MTV be successful with the software. A "Red Hat for Drupal", as it were. I also noticed that other open source projects, such as Linux, had benefitted from well-capitalized backers like Red Hat and IBM. While I knew I wanted to start such a company, I had not yet figured out how. I wanted to complete my PhD first before pursuing business. Due to the limited time and resources afforded to a graduate student, Drupal remained a hobby.

Little did I know that at the same time, over 3,000 miles away, Jay Batson was skimming through a WWII Navajo Code Talker Dictionary. Jay was stationed as an Entrepreneur in Residence at North Bridge Venture Partners, a venture capital firm based in Boston. Passionate about open source, Jay realized there was an opportunity to build a company that provided customers with the services necessary to scale and succeed with open source software. We were fortunate that Michael Skok, a Venture Partner at North Bridge and Jay's sponsor, was working closely with Jay to evaluate hundreds of open source software projects. In the end, Jay narrowed his efforts to Drupal and Apache Solr.

If you're curious as to how the Navajo Code Talker Dictionary fits into all of this, it's how Jay stumbled upon the name Acquia. Roughly translating as "to spot or locate", Acquia was the closest concept in the dictionary that reinforced the ideals of information and content that are intrinsic to Drupal (it also didn't hurt that the letter A would rank first in alphabetical listings). Finally, the similarity to the word "Aqua" paid homage to the Drupal Drop; this would eventually provide direction for Acquia's logo.

Breakfast in Sunnyvale

In March of 2007, I flew from Belgium to California to attend Yahoo's Open Source CMS Summit, where I also helped host DrupalCon Sunnyvale. It was at DrupalCon Sunnyvale where Jay first introduced himself to me. He explained that he was interested in building a company that could provide enterprise organizations supplementary services and support for a number of open source projects, including Drupal and Apache Solr. Initially, I was hesitant to meet with Jay. I was focused on getting Drupal 5 released, and I wasn't ready to start a company until I finished my PhD. Eventually I agreed to breakfast.

Over a baguette and jelly, I discovered that there was overlap between Jay's ideas and my desire to start a "RedHat for Drupal". While I wasn't convinced that it made sense to bring Apache Solr into the equation, I liked that Jay believed in open source and that he recognized that open source projects were more likely to make a big impact when they were supported by companies that had strong commercial backing.

We spent the next few months talking about a vision for the business, eliminated Apache Solr from the plan, talked about how we could elevate the Drupal community, and how we would make money. In many ways, finding a business partner is like dating. You have to get to know each other, build trust, and see if there is a match; it's a process that doesn't happen overnight.

On June 25th, 2007, Jay filed the paperwork to incorporate Acquia and officially register the company name. We had no prospective customers, no employees, and no formal product to sell. In the summer of 2007, we received a convertible note from North Bridge. This initial seed investment gave us the capital to create a business plan, travel to pitch to other investors, and hire our first employees. Since meeting Jay in Sunnyvale, I had gotten to know Michael Skok who also became an influential mentor for me.

Wired interview
Jay and me on one of our early fundraising trips to San Francisco.

Throughout this period, I remained hesitant about committing to Acquia as I was devoted to completing my PhD. Eventually, Jay and Michael convinced me to get on board while finishing my PhD, rather than doing things sequentially.

Acquia, my Drupal startup

Soon thereafter, Acquia received a Series A term sheet from North Bridge, with Michael Skok leading the investment. We also selected Sigma Partners and Tim O'Reilly's OATV from all of the interested funds as co-investors with North Bridge; Tim had become both a friend and an advisor to me.

In many ways we were an unusual startup. Acquia itself didn't have a product to sell when we received our Series A funding. We knew our product would likely be support for Drupal, and evolve into an Acquia-equivalent of the RedHat Network. However, neither of those things existed, and we were raising money purely on a PowerPoint deck. North Bridge, Sigma and OATV mostly invested in Jay and me, and the belief that Drupal could become a billion dollar company that would disrupt the web content management market. I'm incredibly thankful for Jay, North Bridge, Sigma and OATV for making a huge bet on me.

Receiving our Series A funding was an incredible vote of confidence in Drupal, but it was also a milestone with lots of mixed emotions. We had raised $7 million, which is not a trivial amount. While I was excited, it was also a big step into the unknown. I was convinced that Acquia would be good for Drupal and open source, but I also understood that this would have a transformative impact on my life. In the end, I felt comfortable making the jump because I found strong mentors to help translate my vision for Drupal into a business plan; Jay and Michael's tenure as entrepreneurs and business builders complemented my technical strength and enabled me to fine-tune my own business building skills.

In November 2007, we officially announced Acquia to the world. We weren't ready but a reporter had caught wind of our stealth startup, and forced us to unveil Acquia's existence to the Drupal community with only 24 hours notice. We scrambled and worked through the night on a blog post. Reactions were mixed, but generally very supportive. I shared in that first post my hopes that Acquia would accomplish two things: (i) form a company that supported me in providing leadership to the Drupal community and achieving my vision for Drupal and (ii) establish a company that would be to Drupal what Ubuntu or RedHat were to Linux.

An early version of our website, with our original logo and tagline. March 2008.

The importance of enduring values

It was at an offsite in late 2007 where we determined our corporate values. I'm proud to say that we've held true to those values that were scribbled onto our whiteboard 10 years ago. The leading tenet of our mission was to build a company that would "empower everyone to rapidly assemble killer websites".

Acquia vision

In January 2008, we had six people on staff: Gábor Hojtsy (Principal Acquia engineer, Drupal 6 branch maintainer), Kieran Lal (Acquia product manager, key Drupal contributor), Barry Jaspan (Principal Acquia engineer, Drupal core developer) and Jeff Whatcott (Vice President of Marketing). Because I was still living in Belgium at the time, many of our meetings took place screen-to-screen:

Typical work day

Opening our doors for business

We spent a majority of the first year building our first products. Finally, in September of 2008, we officially opened our doors for business. We publicly announced commercial availability of the Acquia Drupal distribution and the Acquia Network. The Acquia Network would offer subscription-based access to commercial support for all of the modules in Acquia Drupal, our free distribution of Drupal. This first product launch closely mirrored the Red Hat business model by prioritizing enterprise support.

We quickly learned that in order to truly embrace Drupal, customers would need support for far more than just Acquia Drupal. In the first week of January 2009, we relaunched our support offering and announced that we would support all things related to Drupal 6, including all contributed modules and themes as well as custom code.

This was our first major turning point; supporting "everything Drupal" was a big shift at the time. Selling support for Acquia Drupal exclusively was not appealing to customers; however, we were unsure whether we could financially sustain support for every Drupal module. As a startup, you have to be open to modifying and revising your plans, and to failing fast. It was a scary transition, but we knew it was the right thing to do.

Building a new business model for open source

Exiting 2008, we had launched Acquia Drupal and the Acquia Network, and had committed to supporting all things Drupal. While we had generated a respectable pipeline for Acquia Network subscriptions, we were not addressing Drupal's biggest adoption challenges: usability and scalability.

In October of 2008, our team gathered for a strategic offsite. Tom Erickson, who was on our board of directors, facilitated the offsite. Red Hat's operational model, which primarily offered support, had laid the foundation for how companies could monetize open source, but we were convinced that the emergence of the cloud gave us a bigger opportunity and helped us address Drupal's adoption challenges. Coming out of that seminal offsite, we formalized the ambitious decision to build Drupal Gardens and Acquia Hosting. Here's why these two products were so important:

Solving for scalability: In 2008, scaling Drupal was a challenge for many organizations. Drupal scaled well, but the infrastructure that companies required to make Drupal scale was expensive and hard to find. We determined that the best way to help enterprise companies scale was by shifting the paradigm for web hosting from traditional rack models to the then-emerging promise of the "cloud".

Solving for usability: In 2008, CMSs like WordPress and Ning made it really easy for people to start blogging or to set up a social network. At the time, Drupal didn't encourage this same level of adoption for non-technical audiences. Drupal Gardens was created to offer an easy on-ramp for people to experience the power of Drupal, without worrying about installation, hosting, and upgrading. It was one of the first times we developed an operational model that would offer "Drupal-as-a-service".

Acquia roadmap

Fast forward to today, and Acquia Hosting evolved into Acquia Cloud. Drupal Gardens evolved into Acquia Cloud Site Factory. In 2008, this product roadmap to move Drupal into the cloud was a bold move. Today, the Cloud is the starting point for any modern digital architecture. By adopting the Cloud into our product offering, I believe Acquia helped establish a new business model to commercialize open source. Today, I can't think of many open source companies that don't have a cloud offering.

Tom Erickson takes a chance on Acquia

Tom joined Acquia as an advisor and a member of our Board of Directors when Acquia was founded. Since the first time I met Tom, I always wanted him to be an integral part of Acquia. It took some convincing, but Tom eventually agreed to join us full time as our CEO in 2009. Jay Batson, Acquia's founding CEO, continued on as the Vice President at Acquia responsible for incubating new products and partnerships.

Moving from Europe to the United States

In 2010, after spending my entire life in Antwerp, I decided to move to Boston. The move would allow me to be closer to the team. A majority of the company was in Massachusetts, and at the pace we were growing, it was getting harder to help execute our vision all the way from Belgium. I was also hoping to cut down on travel time; in 2009 I flew 100,000 miles in just one year (little did I know that come 2016, I'd be flying 250,000 miles!).

This is a challenge that many entrepreneurs face when they commit to starting their own company. Initially, I was only planning on staying on the East Coast for two years. Moving 3,500 miles away from your home town, most of your relatives, and many of your best friends is not an easy choice. However, it was important to increase our chances of success, and relocating to Boston felt essential. My experience of moving to the US had a big impact on my life.

Building the universal platform for the world's greatest digital experiences

Entering 2010, I remember feeling that Acquia was really three startups in one: our support business (the Acquia Network, which was very similar to Red Hat's business model), our managed cloud hosting business (Acquia Hosting), and Drupal Gardens (a software-as-a-service offering based on Drupal). Welcoming Tom as our CEO would allow us to best execute on this offering, and moving to Boston enabled me to partner with Tom directly. It was during this transformational time that I think we truly transitioned out of our "founding period" and began to emulate the company I know today.

The decisions we made early in the company's life have proven to be correct. The world has embraced open source and cloud without reservation, and our long-term commitment to this disruptive combination has put us at the right place at the right time. Acquia has grown into a company with over 800 employees; in total, we have 14 offices around the globe, including our headquarters in Boston. We also support an incredible roster of customers, including 16 of the Fortune 100 companies. Our work continues to be endorsed by industry analysts, as we have emerged as a true leader in our market. Over the past ten years I've had the privilege of watching Acquia grow from a small startup to a company that has crossed the chasm.

With a decade behind us, and many lessons learned, we are on the cusp of yet another big shift that is as important as the decision we made to launch Drupal Gardens and Acquia Hosting in 2008. In 2016, I led the project to update Acquia's mission to "build the universal platform for the world's greatest digital experiences". This means expanding our focus and becoming the leader in building digital customer experiences. Just like I openly shared our roadmap and strategy in 2009, I plan to share our next 10-year plan in the near future. It's time for Acquia to lay down the ambitious foundation that will enable us to be at the forefront of innovation and digital experience in 2027.

A big thank you

Of course, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for all your hard work. After 10 years, I continue to love the work I do at Acquia each day — and that is because of you.

          Drupal Association blog: The Drupal Association has selected a partner to help us maintain the infrastructure   

Earlier this year the Drupal Association put out a Request for Proposals to find a Managed Infrastructure services partner. If successful, the goal of the RFP was to find a partner who could manage the underlying infrastructure of, our sub-sites, and services, so that the internal Drupal Association Engineering team can focus their efforts on work that directly serves our mission.

For an initiative of this scope and financial impact, our policies require a competitive bidding process with no fewer than three vendors. We released a formal Request for Information on March 9th, and sent qualified respondents the full RFP. While the number of proposals received and the details of those proposals are confidential per our policy, we were pleased to receive quite a few letters of interest and proposals from a variety of organizations. Each of the responding organizations was a respected contributor to the community, with experience managing Drupal infrastructure at scale.

Ensuring a fair process was important to us. We assembled a committee to review these proposals made up of members of the Drupal Association leadership and engineering team, as well as trusted volunteers who have helped us to maintain the infrastructure. If any employee, contractor, volunteer, or member of the board had material ties to any of the participating organizations, they were recused from this decision making process.

We evaluated each of these proposals and scheduled interviews with the responding organizations. Each proposal was evaluated based on:

  • Infrastructure management expertise

  • Drupal hosting experience

  • Non-Drupal service expertise

  • Familiarity with the Drupal project and the Drupal Association

  • Prior experience with infrastructure

  • Contribution history

  • Proposed SLA

After interviews and deliberation, we're pleased to announce that we've selected Tag1 Consulting as our Managed Infrastructure Services partner.

Tag1 Consulting

Tag1 brings tremendous experience in Drupal infrastructure management, performance, and scalability, as well as a team with a history of contribution both to the Drupal project and to

We've begun working with Tag1 on an audit of our current infrastructure and hand-off of management responsibilities. At this time infrastructure will still be hosted at the Oregon State University Open Source Lab, and Tag1 will help us manage any future data center transition if it becomes necessary.

We're pleased to be moving forward in partnership with Tag1, and to be able to focus our internal efforts on continued improvements to the services we provide for the community.

          Jaspersoft Studio 6.3.2   
A new, free, open source report designer
          astTECS Launches Open Source CRM   
*astTECS Launches Open Source CRM (March 16, 2017). “Leveraging expertise in open source, our objective is to provide customers with best-of-breed, enterprise-grade applications, while focusing mainly on the must-have features of a customer relationship management solution and major workflows” – Mr. Manjunath R J, CEO, *astTECS, ...
          Artificial Intelligence Python Software Engineer   
MA-Lexington, Solidus is searching for an Artificial Intelligence Python Software Engineer. The candidate will develop AI systems for large multi-sensor and open source data sets. Projects involve system design and architecture and the development of algorithms for machine learning, computer vision, natural language processing, and graph analytics implemented on enterprise big data architectures. The candidate

GTA V PC Enhanced Native Trainer, built on Alexander Blade's original sample. It is open source and hosted on GitHub, and the thread is the main source of updates.

          jTDS JDBC Driver 1.2.8 and 1.3.1 released   

The jTDS project has released versions 1.2.8 and 1.3.1 of the open source JDBC driver for Microsoft SQL Server and Sybase.

New features:
o Backported Kerberos support from jTDS 2.0 (jTDS 1.3 only).

Bug fixes:
o #702, ConnectionJDBC2.getMutex() isn't thread-safe.
o #687, incompatible property name mappings.
o #699, conversion from timestamp String to Date fails.
o #695, trailing line comment breaking connection state on metadata retrieval.
o #694, jTDS logger may cause an NPE.
o #508, the driver ignored unspecified server errors.
o #682, error when using date/time escapes in a procedure call.
o #626, missing type mapping for java.math.BigInteger.
o #683, MSSQL 2000 compatibility issue.
o #552, jTDS may fail to find an SQL Server instance.
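For readers new to the driver, connecting from Java uses jTDS's documented URL scheme, `jdbc:jtds:sqlserver://<host>:<port>/<database>` (or `jdbc:jtds:sybase://` for Sybase). The sketch below only assembles such a URL; the class and helper names are invented for illustration, and the commented lines show how the URL would be used once jtds.jar is on the classpath:

```java
public class JtdsUrlExample {
    // Builds a jTDS connection URL for SQL Server.
    // For Sybase, the prefix would be "jdbc:jtds:sybase://" instead.
    static String sqlServerUrl(String host, int port, String database) {
        return "jdbc:jtds:sqlserver://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        // Placeholder host/port/database -- adjust for a real server.
        String url = sqlServerUrl("localhost", 1433, "mydb");
        System.out.println(url);
        // With jtds.jar on the classpath, a connection would be opened with:
        //   java.sql.Connection conn =
        //       java.sql.DriverManager.getConnection(url, "user", "password");
    }
}
```

Compiling and running this only prints the URL; actually opening a connection additionally requires the jTDS jar and a reachable server.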

          jTDS JDBC Driver 1.2.7 and 1.3.0 released   
The jTDS project has released versions 1.2.7 and 1.3.0 of the open source JDBC driver for Microsoft SQL Server and Sybase. The new versions are a quite huge update, also fixing a number of *critical* bugs and improving performance, so we'd highly encourage you to upgrade.

Version 1.3.0 is the first Java 7 compatible version of the driver and, beside the fix for bug #672, improves performance compared to version 1.2.7. Future development will be focused on the 1.3.x line of the driver, so further enhancements and bug fixes will not necessarily become available in the Java 1.3 compatible jTDS 1.2.x. You should only stick to jTDS 1.2 if you require a Java version prior to Java 7.

Bug fixes (jTDS 1.3.0 only):
o #672, the driver now uses a real MAC address, if available.

Bug fixes:
o #528, any ResultSet gets dropped when using RETURN_GENERATED_KEYS.
o Fixed a bug that caused Statement.close() to throw an exception for errors caused by previously executed statements.
o #609, slow SharedSocket finalization due to expensive locking.
o Fixed format conversion errors for DATETIME, DATE and TIME values.
o #615, SQL parser fails if function escapes contain nested functions.
o #634, incorrect comment processing in callable statements.
o Fixed parameter name format for procedure calls using named parameters.
o #647, preparing statements including a WITH clause don't retrieve meta data.
o #677, possible deadlock in JtdsStatement.close().
o #676, error in SQL parser concerning multi line comments.
o #541, data type mismatch when using {ts}/{t}/{d} JDBC escapes.
o A number of incorrect error messages have been fixed.
o #637, an attempt to execute a standard SQL statement using a CallableStatement now throws an appropriate SQLException on preparation.
o #633, possible NPEs in JtdsObjectFactory.
o Added missing getter/setter methods for connection property "autoCommit" in class JtdsDataSource.
o Corrected data types for connection properties "autoCommit" and "useNTLMv2".
o Added missing default for connection property "useNTLMv2".
o #661, memory pollution caused by ThreadLocal Calendar instances.
o #673, buffer overflow in SQL parser.
o #643, documentation error.
o #659, missing service provider configuration file for JDBC driver class.
o #656, unnecessary log pollution during emulated XA recovery.
o #667, spurious login timeouts if establishing connections concurrently.
o #642, a stream hasn't been closed in CharsetInfo.
o #608, various typos in
o #660, problems with WebRing code of the project website.
o Fixed a race condition when closing a Statement in concurrent threads.
          jTDS JDBC Driver 1.2.5 released    
The jTDS Project has released version 1.2.5 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2.5 is a maintenance release, correcting a number of bugs and adding a few minor features. This release restores backward compatibility with Java 1.3.

Bug fixes:
o [2900139], NoClassDefFoundError with C3P0.
o [2856350]/[2898905], problems with JDBC4 stubs in GlassFish.
o [2911266], default response timeout of 30 seconds if using named pipes.
o [2892493], NullPointerException when receiving character NULL values.
o [2891775], fix for bug [2340241] has broken Java 1.3 compatibility.
o [2883066], numeric overflow in conversion BigInteger/BIGINT.
o [2871274], no soft kill for TimerThread.
o Corrected a bug that led to login errors being masked by later exceptions.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see The release can be downloaded from:
          jTDS JDBC Driver 1.2.4 released    
[The Sourceforge news system has been unavailable for some time, so this announcement has not been posted earlier. jTDS 1.2.4 was already uploaded to Sourceforge on September 29, 2009.]

The jTDS Project has released version 1.2.4 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2.4 is a hotfix release, correcting critical bug [2860742].

Bug fixes:
o [2860742], getByte() causes overflow error for negative values.
o [2856350], JDBC4 method stubs make jTDS unusable.

Make sure to check the changelog for detailed listings of the bug fixes and new features.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see: The release can be downloaded from:
          jTDS JDBC Driver 1.2.3 released   
The jTDS Project has released version 1.2.3 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2.3 is a bugfix release, improving upon jTDS 1.2.2. A few new features also made it into the release.

New features:
o [2340241], process ID can be passed as connection property.
o [1778933], added support for socket keep-alive feature.

Bug fixes:
o [2814376], varchar-type is truncated in non-unicode environment.
o [2349058], DateTime allows invalid dates through.
o [2181003], attempt to set a BC date invalidates internal state.
o [2675463], jTDS returns database users instead of schemas.
o [1855125], jTDS silently ignores integer overflows.
o [1755448], login failure leaves unclosed sockets.
o [1845477], missing license info.
o [1955499], performance problems with timestamps in multi-threaded applications.
o [1793584], login timeout canceled too early.
o [1802986], incorrect charset mapping between 'MAC' and 'ISO-8859-1'.
o [1957748], jTDS is leaking memory.
o [2508201], date values are changed by 3 milliseconds.
o [2796385], jTDS is running out of UDP sockets.
o [1869156], jTDS is leaking memory.
o [2021839], data truncation problem.
o [1811383], ArrayIndexOutOfBounds on executeBatch.
o [2021839], savepoint starts 2 transactions if it's the first operation.
o [1843801], infinite loop if DB connection dies during a batch.
o [2818256], a savepoint is invalid after rollback.
o [1883905], unintentional infinite wait.

Make sure to check the changelog for detailed listings of the bugfixes and new features.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see: The release can be downloaded from:
          jTDS JDBC Driver 1.2.2 released   
Open source JDBC 3.0 Type 4 driver for Microsoft SQL Server (6.5, 7.0, 2000 and 2005) and Sybase. jTDS is the fastest JDBC driver for MS SQL Server and is a complete implementation of the JDBC spec. For more information see

The jTDS Project has released version 1.2.2 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2.2 is a bugfix release, improving upon jTDS 1.2.1. A few new features also made it into the release.

New features:
o 64-bit support for Single Sign On (SSO).
o [1491811], sqlState code for snapshot conflict.

Bug fixes:
o [1774322], Sybase nulled text fields return not null.
o [1592113], NTLMv2 properties on datasource.

Make sure to check the changelog for detailed listings of the bugfixes and new features.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see: The release can be downloaded from:
          jTDS JDBC Driver 1.2.1 released   
The jTDS Project has released version 1.2.1 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2.1 is a bugfix release, improving on the very successful jTDS 1.2. A few new features also made it into the release.

New features:
o Support for specifying the bind address.
o Support for SNAPSHOT transaction isolation.
o NTLMv2 authentication.
o Support for specifying the disk buffer directory.

Bug fixes (partial list):
o Statement memory leak.
o SQLException thrown in CallableStatement setByte()/setDouble()/setFloat().
o Single blank space returned as empty string.
o Execute batch returning incorrect counts.
o Default named pipe path for Sybase.
o Named pipe connections across domains.
o Concurrent batch update failure.
o executeQuery absorbs thread interrupt status.

Make sure to check the changelog for detailed listings of the bugfixes and new features.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see The release can be downloaded from:
          jTDS JDBC Driver 1.2 released   
The jTDS Project has released version 1.2 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.2 is a major bugfix release, improving on the very successful jTDS 1.1. A few new features, such as support for Sybase ASE 15, SQL Server 2005 and improved exceptions, also made it into the release.

New features:
o Support for Sybase ASE 15.
o Improved support for SQL Server 2005 varchar(max) and varbinary(max).
o Complete handling of cursor exceptions and downgrading.
o Better handling of cancels and timeouts.
o Configurable socket timeout.
o Subclasses of basic JDBC types recognized as setObject() values.

Major bug fixes (out of over 30 fixes):
o Statement pool memory leak.
o Java 1.5 BigDecimal problems.
o Possible synchronization problems.
o setAutoCommit() behavior not according to specification.
o getTimestamp() returns invalid value after calling getString().
o Cursor open fails when cursor threshold <> -1.
o iso_1 charset and Sybase.
o "All pipe instances are busy" not handled properly.
o SSL fails with SQL Server 2005.
o Sybase: insert UTF8 string fails when length is 255.

Make sure to check the changelog for detailed listings of the bugfixes and new features.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see The release can be downloaded from:
          jTDS JDBC Driver 1.1 released   
The jTDS Project has released version 1.1 of the open source JDBC driver for Microsoft SQL Server and Sybase. jTDS 1.1 is a major feature release; notable new features include much improved statement caching, configurable metadata caching, optimistic/pessimistic locking support and fast forward-only cursors. Other major changes are the switch to sp_prepare as the default prepare method for prepared statements instead of temporary stored procedures -- which means better performance and no more depending on transaction rollbacks -- and optimistic concurrency instead of row locks on default updatable result sets. Make sure to check the jTDS FAQ for detailed explanations of the new features and new defaults.

Other new features:
o Configurable mapping of large types to LOBs or standard Java types.
o Extended scrollability and updatability options.
o byte[] to String conversions now generate hex values.
o Control over memory/disk buffering.
o Optimized handling of date/time values.
o Complete SQLException chaining.

Bug fixes:
o absolute() and relative() with larger than row count values.
o cancel() synchronization.
o 'Hidden' columns visible with prepared statements.
o Deadlocking with c3p0 due to thread interrupt flag being set.
o BigDecimal to String conversion dropping insignificant trailing zeroes.
o updateRow() reset the position to the beginning of the block.
o Execution failed if statement could not be prepared.
o Deadlocking when parsing an unterminated multi-line comment.
o Sybase getProcedureColumns bug.
o Blob/Clob position methods failed.

jTDS is the most performant JDBC driver for both Microsoft SQL Server and Sybase. It is a complete implementation of JDBC 3.0, it passes the J2EE 1.3 certification and Hibernate test suites and is the preferred SQL Server/Sybase driver for JBoss, Hibernate, Atlassian JIRA and Confluence, DbVisualizer and Compiere. For more information on jTDS see The release can be downloaded from:
jobsDB ref: JID200003001681489 (27-Jun-17)
• A minimum of a bachelor's degree (S1) in Library Science
• At least 2 years of work experience as a librarian; experience at an international primary school is preferred
• Able to understand texts in English
• An interest in reading and in books
• Able to use an open source library management system or repository application
• Able to work independently or in a team
• Basic conversational English for everyday communication
• Diligent, with a strong desire to learn
          How to setup Samba server for file sharing with Windows client   
This post describes how to set up a Samba server for file sharing with Windows clients. Samba is a free and open source software suite that can be used for file sharing and print services. Using the SMB protocol, we can share files with Windows clients.
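To give a flavor of what such a setup involves, here is a minimal share definition for Samba's smb.conf; the share name, path and user below are invented examples, not taken from the linked post:

```ini
[global]
   workgroup = WORKGROUP
   security = user

; A writable share visible to Windows clients as \\server\shared
[shared]
   path = /srv/samba/shared
   browseable = yes
   read only = no
   valid users = smbuser
```

Running `testparm` validates the edited file, and restarting the smbd service applies the change.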
          MilkyTracker 0.90.85   
MilkyTracker is an open source, multi-platform music application for creating .MOD and .XM module files.
          Top Android Apps June 2017   

Looking for some of the best apps? Here is our pick for the top 10 Android apps of June 2017. Flick Launcher: for those of you who love experimenting with new launchers, Flick Launcher is now another choice. It's still an alpha, so expect plenty of bugs, but it's open source and inspired by the Pixel [...]

The post Top Android Apps June 2017 appeared first on Internetseekho.

          Free and Open Source Programs on the Mac   
Last week I officially became an Apple user with the Mac Mini. As a new Mac user, I was excited and immediately set about looking for Mac equivalents of my Windows software. Here is a list of sites to start looking for Mac software: Apple Download – look for freeware in the list. A filter for […]
          Alternative Disk/Partition Cloning Tools   
My question today is: "What are the open source/free alternatives to disk/partition cloning tools like Norton Ghost, Acronis True Image or Paragon Drive Backup?" The selection has certainly improved and matured since I researched this topic more than 2 years ago. Back then a few alternatives (including SystemRescueCD and PartD, Partition Saving) were tested but […]
          Think Camp - Smart Country: Mediathek, Hauptstraße 113, 7521 Bildein   
On Saturday, 1 July 2017 from 9:30 a.m. to 6 p.m. and Sunday, 2 July 2017 from 9 a.m. to 5 p.m., the Think Camp "Smart Country" will take place at the MEDIATHEK in Bildein, Hauptstraße 113.

THE FORMULA OF CHANGE
In a time of growing competition, dwindling resources and economic crises, we will only master the future if we become "smart". "Smart" means several things at once: new technologies, less waste, better coordination and communication, among people as well as among tools. We need this "smartness" not only in the "smart cities" for which international conferences and research networks now exist. We also, and especially, need "Smart Country": making rural areas more attractive through social and technical innovations that make them more livable, more appealing and younger. Southern Burgenland is such an inviting region, one in which sustainable developments have a great chance — though not single-handedly, but in smart combination. On the occasion of a visit by an international group of designers and regional developers who, from Arcadia (Greece) to Vienna, are working on new "being spaces" for a new form of mobile, region-oriented learning (UnaVision/UnaVersity), we invite you to a joint think workshop.

INTENDED GOAL
Live development of a map of the future. "Plan the next real steps, don't get stuck in dreams."
• Create a platform for innovative people in the region
• A hub for those who bring "the future" to the countryside with brilliant ideas and unconventional projects

FILM LOCATION FOR A CINEMA FILM
Footage will be shot during the event for the open source film "Smart Country" by producer Stephan Kanduth.

PROGRAM, SOUTHERN BURGENLAND - PUNITZ/KIRCHFIDISCH/BILDEIN

Friday, 30 June 2017, 6 p.m., PUNITZ/KIRCHFIDISCH
• ARRIVAL of Cessna OE-DVK in Punitz
• Filming at the OpenLandLab in Kirchfidisch
• Presentation of the "Pegasus Trimaran" project (developer/designer Christian ALEXANDER)

Saturday, 1 July 2017, 9:30 a.m. to 1 p.m. and 2:30 to 6 p.m., BILDEIN/MEDIATHEK
• INTRODUCTION of the OpenLandLAB, Leopold Zyka
• GETTING TO KNOW THE OTHER PARTICIPANTS
• KEYNOTE From the Global Marshall Plan to UnaVision, Johannes Pfister: the problem and the opportunities
• KEYNOTE Villages with a future, Franz Nahrada
• IMPULSE TALK Smart regions are more than smart cities, Ramesh Biswas
• IMPULSE TALK AGZ employer alliances, Franz Heumayr
• MINI-WORKSHOPS IN SMALL GROUPS Where the shoe pinches (in the countryside)

New tools and new technologies
• How do I start an OTELO? Robert Fabian and Harald Unterhuber: from the repair café to the 4D printer
• KEYNOTE IoT Austria, Industry meets Makers, Silkroad 4.0. From industry and makers to motorcycles equipped with the Internet of Things: Austrian initiatives are conquering the world, all the way to China! Phillipe Reinisch
• "Wunderwuzzi" robots in school, Erkin Bayirli

New living spaces and perspectives
• KEYNOTE How can the countryside be turned into a paradise? Bernhard Harrer
• What is needed is not a new way of building but a new way of living, architect Ewald Onzek
• Multi-storey communal self-building with wood, clay and straw, Institute for Convivial Practices
• Creative villages and new rural techniques, Waclaw Idziak (Poland)

OPEN SPACE: processing the impulses in small groups

Saturday, 1 July 2017, 7 p.m., BILDEIN/MEDIATHEK
Film screening: DIE ZUKUNFT IST BESSER ALS IHR RUF (The Future Is Better Than Its Reputation). Everywhere we hear about crises, and the media stoke uncertainty. How do we respond? Six examples tell of the possibility of shaping the course of events ourselves. Running time: 85 min. Followed by a discussion of the film and a country party (open end).

Sunday, 2 July 2017, 9 a.m. to 1 p.m. and 2:30 to 5 p.m., BILDEIN/MEDIATHEK
Success stories and encouragement: Lightning Talks & FlashLights. OPEN MICROPHONE FOR ALL
• Models of change, Pioneers of Change
• A startup in Southern Burgenland: becoming self-sufficient through permaculture, Jesko Schneider
• Wood, the wonder material, Alfred Ruhdorfer, ECOFORMA Sarleinsbach
• Mz* Baltazar's Lab! Stefanie Wuschitz
• Village project "Living in community": an intergenerational housing, working and living project in Fehring (expected to house 100-150 people)
• Culture, diversity, tolerance: culture pass and culture companions, Elke Marksteiner, ARGUMENTO
• Sailing into the 21st century with the Pegasus Trimaran, Christian Alexander
• Farmbot++, an Earthship at the OpenLandLAB? Leopold Zyka
• Electromobility in Southern Burgenland, Günther Gur
• We lay our own fibre-optic cable ourselves, all the way to the farm! Possibly video talks via Skype, e.g. Jan Hut, citizens' networks Groningen
• Rückenwind: auditing association and cooperative incubator. Resources, potentials, talents and networking
• Steiermark Gemeinsam Jetzt: cooperatively connecting civil society, Hansi Herzog
• Transition Austria: getting social change going in municipalities and neighbourhoods, David Steinwender
• What undiscovered resources exist in the countryside?
• What resources from the city could be helpful?

Map of the future: Which ideas want to become reality here and now? GRAND FINALE
          Episode 28: Loosely Coupled Mashup   

In this episode, Ben and Phil join forces with Loosely Coupled to talk about open source and burnout, and briefly discuss their favorite open source projects. Jeff was out of action for a lot of it due to unexpected wifi troubles (in San Francisco of all places), so he sadly did not get to take part as much as he would have liked.

Questions this time around:

How do you deal nicely with someone who’s too reliant on you for solving problems and is too quick to ask you rather than figure it out themselves? – TazeTSchnitzel

How do you guys explain OSS to non tech people? My wife finds it strange that I do work for “free” – Chuck Reeves

As a contractor, how do you feel about “OSS” clauses (that your work can/will be open sourced) in contracts? – Davey Shafik

If you aren’t following Jeff and Matt then definitely go and do that:

The video is less edited than the audio, so download and listen for a slightly shorter and more relevant version.

          Episode 20: A nice friendly chat about Sculpin, Guzzle and PSR-7   

Trying out a slightly more professional format with questions, Phil manages to avoid talking over everyone. Winner!

This show has a history of talking about FIG stuff as it is hard to avoid. The group is working on so much cool stuff and prominent figures of the community are involved. We got two more prominent figures, who also happen to be involved with FIG stuff: Beau Simensen lead developer of Sculpin and Michael Dowling lead developer of the wonderful HTTP library Guzzle, who also works at AWS on their PHP SDK.

We discussed each of their projects, some of the plans for the future, and specifically what's coming up in Guzzle 4 and how that all ties in with the new PSR-7: HTTP Message, currently in "Draft" status. Conveniently, Beau, Michael and Phil are the three FIG members who make up the working group for PSR-7, and they will all be working to get it "Accepted".

See, it all fits!

What are your thoughts on using Bash as a provisioner? Why or why not use it? – Edmund Zynda

Thoughts on the new github Atom editor – Matthew Reschke

Beau & Michael, you’ve both been managing open source packages (OSP) for a few years. What’s the best and worst part of managing an OSP? – Jeremy Lindblom

@Michael do you think a simple HTTP Server interface would fit PSR-7? That would be a good replacement for StackPHP, no? – Marco Pivetta

I’m curious about how this HTTP client relates to the pecl/http extension. There’s been talk in the past of including that extension in the core. – Ben Ramsey

What different circumstances dictate how long a PSR takes to get from proposal to blessed by FIG? – Edmund Zynda

So, plenty for you folks to watch here on the YouTube video!

          Signal Tower   
A stylised signal tower. Originally created for the official shirt for the 2014 Open Source Developers' Conference held at Griffith University.
          Christian Weiske: PEAR will probably be removed from MacOS X   

In a new post to his site, Christian Weiske shares his interaction with the Open Source group at Apple concerning his Structures_Graph PEAR package. They were interested in the package and its functionality, but with one issue.

Fact is that Structures_Graph is used in the PEAR installer, which is shipped as part of OSX's PHP packages. Apple simply wanted to continue their current setup without changing anything

Unfortunately, Apple had issues with the package being under the LGPLv3 license. They had a concern that, in certain circumstances, the license could allow the owner access to other potentially sensitive information from the user. He lists out his options - basically either changing the license, asking Apple for compensation or just telling them "no". Unfortunately, if they decide that having it under that license isn't acceptable, they may drop PEAR altogether (as the package is a part of the installer itself).

          Firefox 54.0.1 64-bit   
Mozilla Firefox is a fast, light and tidy open source web browser. At its public launch in 2004 Mozilla Firefox was the first browser to challenge Microsoft Internet Explorer’s dominance. Since then, Mozilla Firefox has consistently featured in the t...
          German database startup ArangoDB closes funding round at €4.2 million   

Open source database startup ArangoDB has closed its latest funding round at €4.2 million. This investment was led by Target Partners and follows the initial investment of €2.2 million last November. The German-based company develops multi-model databases, combining graph, key/value, and JSON documents into one database, which it says will help other startups speed up […]


          C# with React.js Developer, Remote Work Possible, Belgrano Area, CABA, URGENT   
Argentina - KaizenRH is looking for a C# with React.js Developer to work in the modern offices of a major company in Belgrano dedicated... to custom software development using open source technologies for corporate clients in the United States. Requirements: C# developer with React.js...
          C# with React.js Developer // Remote Work Possible   
Belgrano, Buenos Aires - KaizenRH is looking for a C# with React.js Developer to work in the modern offices of a major company in Belgrano... dedicated to custom software development using open source technologies for corporate clients in the United States. Requirements: C# developer...
          Platform9 Raises $22M to Make Open Source Cloud Infrastructure Tech Easier   
          Google Nexus Smartphones   
Google, in collaboration with an Original Equipment Manufacturer (OEM) partner, introduced the Google Nexus cell phone. It is based on the Android operating system and is one of the few devices recommended for Android software development by the Android Open Source Project. Nexus One: In the line of Nexus, the first mobile was […]
          Open Source RVA June 30, 2017   
GET IN STEP WITH THE SOURCE! On Friday’s edition of Open Source RVA we talk with author Howard Owen about the alternate Richmond that he’s concocting in his “Willie Black” mystery novels — books like “Oregon Hill,” “The Bottom” and his just-released “The Devil’s Triangle.” Owen will appear at a launch for the new work […]
          The Ultimate Data Infrastructure Architect Bundle for $36   
From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
Expires June 01, 2022 23:59 PST
Buy now and get 94% off

Learning ElasticSearch 5.0


Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of ElasticSearch 5.0.

  • Access 35 lectures & 3 hours of content 24/7
  • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
  • Add more power to your searches using filters, ranges, & more
  • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
  • Build, test, & run your first LogStash pipeline to analyze Apache web logs
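
As a sketch of the query concepts listed above (queries, filters, ranges), here is the kind of JSON request body an ElasticSearch 5.x search accepts, built as a plain Python dict. The field names (`message`, `response_time_ms`) and the `build_query` helper are invented for illustration; with the elasticsearch-py client you would pass such a dict as the `body` of a search call.

```python
# Sketch of an ElasticSearch 5.x request body: a scored full-text match
# combined with a non-scoring range filter inside a bool query.

def build_query(text, max_response_ms):
    """Build a bool query: the match clause is scored, the range is a filter."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"match": {"message": text}}  # scored full-text clause
                ],
                "filter": [
                    # filters are cached and do not affect relevance scoring
                    {"range": {"response_time_ms": {"lte": max_response_ms}}}
                ],
            }
        },
        "size": 10,  # return at most 10 hits
    }

query = build_query("timeout error", 500)
print(query["query"]["bool"]["filter"][0]["range"]["response_time_ms"])
```

Serialized to JSON, this is exactly what you would POST to an index's `_search` endpoint.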


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Ethan Anthony is a San Francisco-based Data Scientist who specializes in distributed, data-centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer-facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud-based technologies such as Amazon Web Services and OpenStack, as well as the data-centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

Apache Spark 2 for Beginners


Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

  • Access 45 lectures & 5.5 hours of content 24/7
  • Learn the Spark programming model through real-world examples
  • Explore Spark SQL programming w/ DataFrames
  • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
  • Discuss Spark's stream processing, machine learning, & graph processing libraries
  • Develop a real-world Spark application


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

Designing AWS Environments


Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

  • Access 19 lectures & 2 hours of content 24/7
  • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
  • Explore the available options for virtual instances & demonstrate launching & connecting to them
  • Design & deploy networking & hosting solutions for large deployments
  • Focus on security & important elements of scalability & high availability


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

Learning MongoDB


Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding
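
To make the aggregation framework mentioned above concrete, here is a sketch in plain Python: a pipeline of the shape you might pass to a driver such as pymongo's `collection.aggregate(pipeline)`, alongside a direct computation of what its `$match` and `$group` stages produce. The collection shape and field names are invented for the example.

```python
# A hypothetical "orders" aggregation: keep positive amounts, group by
# status, and sum the amounts per group.
pipeline = [
    {"$match": {"amount": {"$gt": 0}}},                           # filter stage
    {"$group": {"_id": "$status", "total": {"$sum": "$amount"}}}, # grouping stage
    {"$sort": {"total": -1}},                                     # largest first
]

orders = [
    {"status": "shipped", "amount": 30},
    {"status": "pending", "amount": 10},
    {"status": "shipped", "amount": 15},
]

# What the $match + $group stages above compute, expressed directly:
totals = {}
for doc in orders:
    if doc["amount"] > 0:  # the $match stage
        totals[doc["status"]] = totals.get(doc["status"], 0) + doc["amount"]

print(totals)  # {'shipped': 45, 'pending': 10}
```

Against a real server, the pipeline would return one document per status with the same totals.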


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

Learning Hadoop 2


Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

  • Access 19 lectures & 1.5 hours of content 24/7
  • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
  • Install & configure a Hadoop environment
  • Explore Hue, the graphical user interface of Hadoop
  • Discover HDFS to import & export data, both manually & automatically
  • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
  • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets
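
The MapReduce model referenced in the bullets above can be sketched on a single machine in a few lines of Python: a map phase emits (key, value) pairs, a shuffle phase groups values by key, and a reduce phase folds each group. Hadoop runs these same three phases distributed across many machines; this word count is only an illustration of the model.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: fold each group of values into a single count."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big deal"])))
print(counts)  # {'big': 2, 'data': 1, 'deal': 1}
```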


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

ElasticSearch 5.x Cookbook eBook


ElasticSearch is a Lucene-based distributed search server that allows users to index and search unstructured content with petabytes of data. Through this ebook, you'll be guided through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have an in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

  • Access 696 pages of content 24/7
  • Perform index mapping, aggregation, & scripting
  • Explore the modules of Cluster & Node monitoring
  • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
  • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSql datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

Fast Data Processing with Spark 2 eBook


Compared to Hadoop, Spark is a significantly simpler way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

  • Access 274 pages of content 24/7
  • Get to grips w/ some simple APIs before investigating machine learning & graph processing
  • Learn how to use the Spark shell
  • Load data & build & run your own Spark applications
  • Discover how to manipulate RDD
  • Understand useful machine learning algorithms w/ the help of Spark MLlib & R


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and Distinguished Engineer at Cisco. He has been speaking at various conferences including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData, writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting NFL, Spark, Data Science, Machine Learning, Social Media Analysis, and has been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at His other passion is flying drones (working towards a Drone Pilot License (FAA UAS Pilot)) and Lego Robotics—you will find him at the St. Louis FLL World Competition as Robots Design Judge.

MongoDB Cookbook: Second Edition eBook


MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

  • Access 274 pages of content 24/7
  • Initialize the server in three different modes w/ various configurations
  • Get introduced to programming language drivers in Java & Python
  • Learn advanced query operations, monitoring, & backup using MMS
  • Find recipes on cloud deployment, including how to work w/ Docker containers along MongoDB


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

Cyrus Dasadia always liked tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects are written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

Learning Apache Kafka: Second Edition eBook


Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

  • Access 120 pages of content 24/7
  • Set up Kafka clusters
  • Understand basic blocks like producer, broker, & consumer blocks
  • Explore additional settings & configuration changes to achieve more complex goals
  • Learn how Kafka is designed internally & what configurations make it most effective
  • Discover how Kafka works w/ other tools like Hadoop, Storm, & more
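
One internal design point the bullets above touch on, how keyed messages map to partitions, can be sketched as hash-of-key modulo partition count: every message with the same key lands on the same partition and so stays ordered relative to other messages with that key. Note this is a simplified illustration only; real Kafka producers use a murmur2 hash rather than crc32, and the three-partition topic is hypothetical.

```python
import zlib

NUM_PARTITIONS = 3  # hypothetical topic with three partitions

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a message key to a partition index deterministically."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# The same key always maps to the same partition:
assert partition_for("user-42") == partition_for("user-42")

for key in ["user-1", "user-2", "user-3"]:
    print(key, "-> partition", partition_for(key))
```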


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

Nishant has also undertaken many speaking engagements on big data technologies and is the author of HBase Essentials, Packt Publishing.

Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, and pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

  • Access 178 pages of content 24/7
  • Explore channels, sinks, & sink processors
  • Learn about sources & channels
  • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (

          Learn Express for $15   
Master Express, The Fast & Lightweight Node Framework for Building Back-End Servers
Expires June 05, 2018 23:59 PST
Buy now and get 40% off


Express is an extremely powerful tool for creating web applications built on Node. Over this action-packed series of tutorials you'll learn how to use and implement the Express library. Express is extremely practical, whether you're looking to use it for prototyping, hobbies, or to improve your qualifications, and this course will help you do it all.

  • Access 14 lectures & 1.5 hours of content 24/7
  • Install Express on your computer, run a custom server, & understand requests & responses
  • Use GET & POST requests
  • Write & run tests w/ Mocha & learn about RESTful APIs
  • Test your knowledge w/ quizzes


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Known in development circles as “the Code Whisperer," Daniel Stern has been believed to possess a supernatural connection to computers ever since he talked the supercomputer Deep Blue off the roof of a twelve-story St. Petersburg apartment building, following its shameful loss to Gary Kasparov.

He can often be found singing softly to his tablet, or gently caressing his aluminum keyboard in his arms.

Daniel has been working as a front end and full stack developer in the tech industry since 2011. He's developed single-page applications for banks like CIBC, charities like the Ontario Institute for Cancer Research, and at ad agencies like McLaren McCann, TraffikGroup and Olson. Throughout his labors, he's worked on computer programming in his spare time because, well, he's obsessed with it.

In addition to being featured in both CSS Weekly and JavaScript Weekly, Daniel is well-known throughout the open-source community for maintaining several open-source tools, most notably the Angular.js and LESS-based tool Range.css and the Angular.js audio tool ngAudio.

In addition to being trusted by the open source community to develop top-quality, functional code, Daniel has also been invited to speak at numerous conferences including Full Stack Conference 2014 in London, England.

          Podcast448: Artificial Reality, Free Online Learning Channels & STEAM Studio   
Welcome to the November 5, 2016, podcast episode of "Moving at the Speed of Creativity" with Wesley Fryer, which explores topics relating to artificial reality, free online learning channels and a STEAM Studio reflection. Wes discusses Steven Levy's recent article for Backchannel, "The Google Assistant Needs You," and our current "transition era" as artificial intelligence (AI) technologies mature and become normalized in our lives. He also discusses Elon Musk's recent announcement about camouflaged solar roof panels, and a recent video interview with Musk in which he discussed his reasons for starting the OpenAI (@openai) initiative. Musk's concerns about mature AI technologies are not limited to a RoboCop-style malicious AI future, but also include the danger of AI technologies being tightly controlled by a small number of entities. To guard against the dangers latent in that future scenario, Musk wants more groups, individuals and nations to have access to powerful AI algorithms and capabilities through the open source movement. In part two of the podcast, Wes discussed some of his favorite podcast channels and websites which provide fantastic opportunities for free, online learning. This begins with the K12 Online Conference (@k12online) which launched its 2016-17 mini-conference series on October 21st with a 3 part keynote and live panel discussion on YouTube Live with Julie Lindsay (@julielindsay). This first strand of the conference this year focuses on global collaboration. Strand two will focus on "Learning Spaces" and starts November 14th with a keynote by David Jakes (@djakes). Favorite tech podcasts mentioned by Wes in this episode include Clockwise by RelayFM (@clockwisepod), The Committed (@CommittedShow), and Note to Self (@notetoself). Wes also mentioned his weekly podcast and live webshow (on most Wednesday nights) The EdTech Situation Room (@edtechSR). 
The third part of this podcast features a recorded reflection by elementary art teacher Megan Thompson (@seeingnewshapes) and Wes discussing the "STEAM Studio" after-school enrichment class they co-taught this past semester. They discuss things that went well, things they would change, and success stories from this STEAM (Science, Technology, Engineering, Art and Math) collaboration. If you listen to and enjoy this episode, please reach out to Wes with a comment or via a Twitter reply to @wfryer. Thanks for listening to "Moving at the Speed of Creativity!"
          Podcast397: Takeaways from and Reflections on the 2012 EDUCAUSE Conference   
This is a “podcast from the road” by Wesley Fryer reflecting on the 2012 EDUCAUSE conference. In the podcast Wesley discusses learning management systems including the free/beta LMS OpenClass from Pearson, Kuali open source solutions for higher education, badge-based learning initiatives and tools, open journals / open academic publishing, Wolfram Alpha, MOOCs and native mobile apps vs mobile websites. Check out the podcast shownotes for referenced links / resources.
          FAR - "Find And Replace" Tool

I was looking for a free "Find and Replace" tool that could do a bit more than the usual "find and replace" options in simple text editors (I needed to add some identical text with some special characters, plus an additional line break, after every single paragraph of a longer text, which the simple editors couldn't do).

I came across this really nice open source tool. It requires no installation, but it does require Java. It can process several files at once and do quite a few other sophisticated things.

It has a very useful "Undo" option that can undo the entire batch operation in one go, which simple editors can't do. To achieve some more sophisticated replacements you HAVE to read the help file first and learn a bit about "regular expressions". It took me about half an hour until I achieved what I wanted to do. You can also save your search or replacement patterns (useful if it's a more complicated pattern that you can't quickly make up again). And it has a nice Preview, so you can see in advance whether the result will be satisfactory.

I have to add that, after I learned about "regular expressions", I managed to achieve the same result in my regular text editor (Metapad). But I could never have worked it out without this tool (and its help file). Also, if you make one little mistake in your text editor and mess up the operation (with a more complicated replacement you will inevitably start out with some unwanted results), you can't undo it in one go as you can with this tool.
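For anyone curious what such a replacement looks like under the hood, here is a minimal sketch in Python of the kind of regular-expression pattern described above; the `*** END ***` marker is a made-up placeholder for the "identical text", and FAR itself is Java-based, so this is only meant to illustrate the idea:

```python
import re

text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."

# Split on blank lines, then append a marker line (the "identical text")
# plus an extra line break after every paragraph.
paragraphs = re.split(r"\n\s*\n", text)
result = "\n\n".join(p + "\n*** END ***" for p in paragraphs)
print(result)
```

The same split-and-rejoin could be expressed as a single `re.sub`, but the two-step version is easier to reason about when you are new to regular expressions.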
          Podcast334: One to One Learning with Open Source Netbooks is Practical, Affordable and Powerful - Learn Why   
One to one learning with wireless, digital devices in the hands of every learner in the classroom is the future. With netbooks running over 100 free educational applications on Ubuntu Linux, that dream can be a reality in your classroom and school district today, not tomorrow. As I explain in the introduction to this podcast featuring two interviews, I have lost NONE of my enthusiasm for Apple and Macintosh computers, but I think it would be foolish to ignore the powerful and affordable computing and learning opportunities now offered by netbooks as well as open source software. After sharing a plug for the upcoming FREE K-12 Online Conference in December and an introduction to these interviews, this podcast includes an interview with Warren Luebkeman. Warren is a co-founder of the Open 1:1 Nonprofit organization, which is based in Maine and provides a FREE Ubuntu image for netbooks loaded with over 100 educational and productivity applications. That recording was made at the ACTEM 2009 conference in Augusta, Maine in October. The second interview is with Alex Inman, who has been implementing and supporting 1:1 initiatives for over 8 years in Milwaukee and St Louis. Alex shared a presentation at the One to One Institute's November 2009 conference called "Saving Money on Your One-to-One Program." In this interview Alex specifically addresses the viability and power of Ubuntu as a platform on netbook computers for student learning. He discusses powerful open source solutions like iTalc (for desktop monitoring) and iFolder (for cross-platform remote file sharing). Additionally, he addresses the importance of support for "cultural change" in schools for 1:1 laptop learning initiatives. That buy-in from top leadership all the way down to the classroom is even more important for laptop initiative success than the platform / hardware.
          The Yamchurian Candidate   
“At no time were any intelligence sources or methods discussed, and no military operations were disclosed that were not already known publicly.” - H.R. McMaster, National Security Advisor
"As President I wanted to share with Russia (at an openly scheduled W.H. meeting) which I have the absolute right to do, facts pertaining to terrorism and airline flight safety. Humanitarian reasons, plus I want Russia to greatly step up their fight against ISIS & terrorism." - Donald Trump today on Twitter
“I’m sure Kislyak was able to fire off a good cable back to the Kremlin with all the details.” - A former US official
(By American Zen's Mike Flannigan, on loan from Ari Goldstein)
Thomas P. Bossert, the assistant to the President for homeland security and counterterrorism, had to do some quick thinking and make some fast phone calls to Langley, VA and Fort Meade, MD. We don't know the exact contents of those calls, but three facts are clear: Donald Trump had just met with high-level Russian officials in the Oval Office, had recklessly declassified previously code-word classified intelligence, and had given it to them on a silver platter.
     Such phone calls are hastily made in the interests of damage control by stern, colorless, pragmatic bureaucrats who help every government on earth run on a more or less even keel. One can only imagine the looks on the faces of the people at the other end of those phone calls at CIA and NSA headquarters. And, in the short reign of Trump, one can easily imagine such fires frantically being tamped out on a regular basis.
     The horrible optics alone should have called for a Congressional probe. Trump's meeting with the Russian Foreign Minister and their Ambassador to the US came the day after James Comey was fired by Trump. The US news services were kicked out and supplanted by TASS, the official news apparatus of the Russian government. Then in a TV interview, Trump essentially admitted he fired Comey because of the Russia investigation he was heading up (even though his right wing supporters don't believe him).
     H.R. McMaster, Trump's National Security Advisor, was trotted out yesterday and again today to essentially fall on his own sword and it was difficult to hear him from under the bus wheels that were rolling over his chest. McMaster's story today was in stark contrast with his earlier press conference. Today he acknowledged that Trump did share classified intel with the Russians. But earlier, he'd claimed, "At no time were any intelligence sources or methods discussed, and no military operations were disclosed that were not already known publicly."
     Yet today, he'd also said that what Trump shared was "open source reporting." Which explains why the American media weren't allowed to attend the meeting with the Russians but TASS was.

A Child's Crusade
Don't get me wrong or think I'm going soft. My colleague David Brooks has been a blithering, pseudo-intellectual idiot since Tucker Carlson was in short pants. But in today's op-ed, Mr. Pink Tie nails Trump and puts his usually shaky finger on the pulse of what's truly wrong with this so-called administration. These are just some of the highlights of Brooks' piece:
At base, Trump is an infantalist... Immaturity is becoming the dominant note of his presidency, lack of self-control his leitmotif... First, most adults have learned to sit still. But mentally, Trump is still a 7-year-old boy who is bouncing around the classroom... Trump seems to need perpetual outside approval to stabilize his sense of self, so he is perpetually desperate for approval, telling heroic fabulist tales about himself... Which brings us to the reports that Trump betrayed an intelligence source and leaked secrets to his Russian visitors. From all we know so far, Trump didn’t do it because he is a Russian agent, or for any malevolent intent. He did it because he is sloppy, because he lacks all impulse control, and above all because he is a 7-year-old boy desperate for the approval of those he admires.
     You get the message. Most interestingly, Brooks talks about the Dunning-Kruger Effect, which is a familiar term to many of us. For those to whom it isn't, it's when a person's incompetence is such that they're not even aware of their incompetence. It goes back to the late John Updike's gentle jeremiad about success, in which he'd said the successful are often fooled into believing that "You get the idea that anything that you do is in some way marvelous." Brooks concludes with this lyrical observation that nonetheless has nightmarish implications, "We’ve got this perverse situation in which the vast analytic powers of the entire world are being spent trying to understand a guy whose thoughts are often just six fireflies beeping randomly in a jar."
     And this gets to the crux of this reckless and dangerous disclosure to the Russians: Trump isn't the usual case of a man suddenly drunk on his seemingly limitless power. He's a child, or at best a teenager, getting his first taste of some alcoholic ambrosia he believes was imparted straight to him by the gods. He's a man who simply can't keep a secret, a Quixotic quidnunc tilting at windmills that don't exist (fake voters, fake news, fake this and that).
     And it may very well be that Trump honestly believes that giving code-word classified intelligence to the Russians about an ISIS plot, and potentially exposing a very sensitive source of intelligence, was an act of humanity. But if Trump and his Russian buddies think this is the way to go about it, especially as we ostensibly have opposing interests in Syria, then it's a foolish child's crusade.
     Among the many lessons Trump needs to learn in statecraft is that not exercising power is the true test of power, that just because you can wield power (such as declassifying classified information) doesn't mean that you should. Trump will never learn that. Because, after nearly 71 years on this planet, if Trump hasn't learned that simple lesson, he never will.

          Angular 2 From The Ground Up for $23   
Expires October 03, 2021 23:59 PST
Buy now and get 74% off


Learning Angular 2, the new version of the JavaScript framework created by Google, is easy with this immersive 9 hour course. You'll cover all of the fundamentals of Angular 2 and gain the skills to separate yourself from other web developers. The best part? This course is still adding content that you'll have access to down the line.

  • Access 82 lectures & 10.5 hours of content 24/7
  • Understand the basic Angular 2 concepts, like Components, Form Validation, Templates, Services, Dependency Injection & more
  • Choose the best language for you, between JavaScript, new JavaScript, or TypeScript
  • Make HTTP requests & integrate w/ backend
  • Set up a production-ready build workflow using NPM & Webpack
  • Write unit tests w/ Jasmine & run them w/ Karma
  • Navigate w/ the Angular Router
  • Receive downloadable code samples for the Angular 2.0.0 final release


Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels


  • Internet required


Mirko has over 15 years of experience in Software Development and has worked for many different companies, from startups to large, high-profile organisations such as the BBC, the Expedia group, and The Financial Times.

Ever since getting his hands on a Commodore 64 as a kid he nourished a passion for computers that led him to start using Linux in 1997, publishing open source projects in 2003, and practising Test-Driven Development (TDD) and Extreme Programming (XP) in 2005.

He has written code in more than a dozen different programming languages, and is familiar with all the facets of application development, from backend services to web front-ends and mobile apps. He also holds a Postgraduate Diploma in Software Development from the Open University.

He is always keen to learn new technologies and enjoys teaching online because it gives him the opportunity to share his experience with thousands of other developers.

He is currently based in London, U.K., where he runs his consultancy company, Encoded Knowledge Ltd.

          Pop!_OS is a new distro created by System76, the well-known manufacturer of Linux computers   


System76 is a manufacturer of desktop and laptop computers that has been in the market for some years now. Based in Denver, Colorado, in the United States, they have been selling machines with Linux preinstalled for more than a decade.

Their first computers shipped with Ubuntu 5.10, and today they continue to use Ubuntu as the operating system of choice for their products. Well, that is about to change, slightly. System76 is working on its own distribution, called Pop!_OS.


Yet another Ubuntu-based distro... but

Pop!_OS is a new distribution created by the company, and it will be based on... Ubuntu. At first glance it looks like a simple fork of Ubuntu 17.04 with GNOME that has its own GTK theme and icons. Both of which, by the way, are quite pretty and eye-catching.

However, the company has emphasized an interesting detail about the audience its system is aimed at:

We are not building an alternative to Windows and Mac, or an operating system for mainstream users. We are developing Pop!_OS for those tasks where Linux excels, and this matches System76's customer base.


This new distro is designed especially for people who use their computer to create, whether that means professional software products, 3D models, computer science, engineering, artificial intelligence, the Internet of Things, or other cutting-edge technologies.

It is a different kind of audience; these are not the same creators that, say, the Windows 10 Creators Update tries to please. Apparently the company's customer base, which obviously uses Linux, uses it for fairly specific things.

If you are a hardware manufacturer and you have more control over the software you ship with your products, this can only be good news for you and your customers. In theory, anyone who owns one of the brand's machines will get a system optimized for the hardware they bought and for their needs.

Perhaps System76 got tired of waiting for Canonical and of Ubuntu's slow and bumpy development in recent years, especially after the death of convergence, the goodbye to Unity, and the return to GNOME.

And for the rest of the community?


It is another classic case of the good and the bad of Linux. On the one hand, some may say, "ugh, what a pain, yet another Ubuntu-based distro." On the other, there is the essential characteristic of open source: anyone can do whatever they want with it.

Pop!_OS may be optimized for System76 machines, but it is obviously an open project that anyone can install on their computer to try out, to see what they think of it and whether it turns out to be the new love of their life.

There is not long to go before the stable release in October, but in the meantime you can follow development on GitHub, download the ISO of the alpha version directly (very slow servers), or get it via torrent.

If the distro as a whole doesn't interest you, but you find its look interesting and pretty, the GTK theme and the icons are available to install on other distributions through the official repository.

At Genbeta | How to tune up your Linux computer


The article Pop!_OS es una nueva distro creada por System76, el conocido fabricante de ordenadores con Linux was originally published on Genbeta by Gabriela González.

          Caprine, an elegant Facebook Messenger client for the Windows, Linux and Mac desktop   


Facebook now has 2 billion users, and its messaging service integrated with the social network, Messenger, recently reached 1.2 billion monthly active users. With numbers that stratospheric, chances are you use the service yourself or know someone who does.

However, outside mobile devices, only Windows 10 has an official native app for both Facebook and Messenger, and they are exactly the same as the web versions. If you want to try an alternative with some extra features, one that is also available on macOS and Linux, you could give Caprine a try.


Caprine is an unofficial Facebook Messenger client for the desktop. It is compatible with Windows 7, 8/8.1 and Windows 10, with macOS 10.9 and later, and with several Linux distributions.

Among its most interesting features are two additional appearance themes: a dark one, and one called "Vibrant", as seen in this article's cover image. It resembles the acrylic material style of Windows 10's new Fluent Design.


You can change the theme easily by pressing the keyboard shortcut CTRL + D. There is also a compact mode that adapts when the window gets very small. In addition, you get several extra keyboard shortcuts to search, delete and archive conversations, switch chats, insert GIFs, and more.

Another interesting thing is that if you use Caprine instead of Messenger on the web, the links you click inside the messenger will not be tracked by Facebook.

Caprine is open source and you can see the code on its GitHub page. That said, don't expect it to use little RAM, since it is based on Electron.

At Genbeta | Ring is an open source, cross-platform Skype clone focused on privacy


The article Caprine, un elegante cliente de Facebook Messenger para el escritorio de Windows, Linux y Mac was originally published on Genbeta by Gabriela González.

          More RSS reading   
Microsoft missing an RSS strategy, Houston Chronicle says.

Dwight Silverman, in the Houston Chronicle, writes: Microsoft MIA on RSS.

Ouch, but, yes, it's frustrating how long it takes to get new features added to our products.

That said, let's meet up again at the PDC in September and see if you still think we're missing in action.

But, there's another way to look at it. We've built a platform that lets developers add value. There are a TON of RSS news aggregators on Windows. Look at Onfolio 2.0, for instance. That works with Firefox and IE and is an awesome aggregator. Or, look at FeedDemon. That's a standalone application, developed in Borland's Delphi, that rocks too. RSS Bandit was developed, on .NET, by a Microsoft employee during his nights and weekends and it has a huge community around it (it's free too and now is being run as an open source project, so it's getting lots of new features added very quickly). Then you look at NewsGator (and their competitors IntraVnews and YouSoftware). Those plug into Outlook (I use NewsGator as my primary RSS News Aggregator).

So, Microsoft's platforms get credit for these innovative -- and quite different from each other -- approaches to RSS.

We need to remember that anything Microsoft does will affect the livelihoods of the developers who built these products (and took the business risk back when RSS didn't look important). They validated Microsoft's investment in development tools and platforms and for that I'm very grateful.

[Scobleizer: Microsoft Geek Blogger]
          LinkedIn helps companies when their applications or website fail   
The company is releasing as open source two tools that let businesses deal with outages in their apps or online sites.
          Grade 7 Material: Application Programs (A. Types of Application Software)   

A. Types of Application Software

Software is a collection of instructions executed by the computer in carrying out its work. Software also serves as the computer's record for storing commands, documents and other files.

Software is electronic data stored by the computer itself; this stored data can be programs or instructions to be executed, as well as the records the computer needs in order to run the commands it executes. To achieve this, a logical structure is designed; that logic is processed through software, also called a program, together with the data it processes. Processing in software involves several elements, among them the operating system, programs, and data. The software arranges all of this so that the logic can be understood by the computer.

In general, software can be divided into three parts: operating systems, programming languages, and application software.


a. What Is an Operating System
An operating system is the bridge between the user of a computer and the computer hardware. Before operating systems existed, people could only use computers by means of analog and digital signals. As knowledge and technology have developed, a variety of operating systems now exist, each with its own strengths. To understand operating systems better, it helps to first know a few basic concepts about them.
In general terms, an operating system is the manager of all the resources in a computer system; it provides a set of services (system calls) to users so that using and exploiting the computer system's resources becomes easier and more convenient.
An operating system functions rather like the government of a country, in the sense that it creates the conditions for the computer to run programs correctly. To avoid conflicts when users access the same resource, the operating system regulates which user may access a given resource; for this reason it is often called a resource allocator. One more important function of an operating system is to act as a control program, whose aim is to avoid errors and unnecessary use of the computer.

b. History of Operating Systems
According to Tanenbaum, operating systems have developed very rapidly, and this development can be divided into four generations:
First generation (1945-1955)
The first generation marked the beginning of electronic computing systems as a replacement for mechanical ones, because humans' calculating speed is limited and humans very easily make slips, mistakes and errors. In this generation there was no operating system yet, so computer systems were given instructions that had to be carried out directly.
Second generation (1955-1965)
The second generation introduced the Batch Processing System: jobs collected in a single series and then executed sequentially. In this generation computer systems were still not equipped with an operating system, but some operating-system functions already existed, for example FMS and IBSYS.
Third generation (1965-1980)
In this generation operating systems were developed to serve many users at once, with interactive users communicating with the computer through on-line terminals; the operating system thus became multi-user (used by many people at once) and multi-programming (serving many programs at once).
Fourth generation (after the 1980s)
Today, operating systems are used for computer networks in which users are aware of the existence of interconnected computers. In this era users have also been made comfortable by the graphical user interface (GUI), a very convenient graphics-based computer interface. This era also saw the start of distributed computing, in which computation is no longer centralized at a single point but spread across many computers, achieving better performance.

c. Types of Operating Systems
There are very many kinds of operating systems; we simply choose which kind to use on our computer, from licensed ones to free (open source) ones, including:
1. DOS
2. Windows; some versions of Windows: Windows 95, Windows 98, Windows 2000 Professional, Windows 2003, Windows XP, Windows Vista
3. Linux; various Linux distros: Red Hat, Fedora Core, Mandrake, SuSE, Knoppix
4. Apple System
5. Macintosh


Language software (language programs) are programs used to translate instructions written in a programming language into machine language so that the computer can accept and understand them.
High Level Languages
High-level languages are languages that anyone willing to learn can easily understand, because they are built from everyday human language. High-level languages are nowadays commonly used to build business-oriented or science-oriented application programs. Examples of high-level languages are: Basic, dBase, Cobol, Pascal, C++, Visual Basic, Visual FoxPro, Delphi, PHP, and many others.


Application software is software that anyone can use to help with their work. Application software can easily be installed on our computer, and can be grouped into two kinds:

a. Application Programs
Application programs are programs written directly by a programmer, tailored to the needs of an individual or of a company, usually with the help of a programming language, for instance Visual Basic, PHP, or any other suitable language. Examples of application programs are:
Employee payroll programs
Airplane/ship ticket sales programs
Cashier (point-of-sale) programs
Internet café / phone kiosk billing programs

b. Packaged Programs
Packaged programs are special programs in particular packages, made by a software house or bundled directly with an operating system. Below are examples of the various kinds of packaged application programs:
Word processors, for example: Microsoft Word, Open Writer, ChiWriter, WordPerfect, WordStar, KWrite, AmiPro, etc.
Spreadsheets, for example: Microsoft Excel, Open Calc, Quattro Pro, Lotus 123, etc.
Presentation programs, for example: Microsoft PowerPoint, Open Impress, Magic Point, Corel Presentations, AppleWorks, etc.
Graphic design programs, for example: Adobe Photoshop, Corel Draw, FreeHand, AutoCAD, etc.
Browsers, for example: Internet Explorer, Mozilla Firefox, Opera, Netscape Communicator.
Database programs, for example: Microsoft Access, Open Base, Visual FoxPro, FoxBase, dBase I-IV, etc.
Animation programs, for example: Macromedia Flash, Swish, etc.
Multimedia programs, for example: Windows Media Player, WinAmp, Cyberlink, Real Player, DVD Player,
          A Beginner’s Guide to #EdTech Open Source Software   
Did you ever explore a learning possibility only to realize that the software to make it happen costs a lot of money? Do you want to write music but can’t afford Finale or Sibelius? Do you […]
          Xavier Mertens: FIRST TC Amsterdam 2017 Wrap-Up   

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This is my first participation in a FIRST event. FIRST is an organization that helps with incident response, as stated on their website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was held at the Cisco office. Monday was dedicated to a training session about incident response, and the next two days were dedicated to presentations, all of them focusing on the defence side ("blue team"). Here are a few notes about interesting things that I learned.

The first day started with two guys from Facebook: Eric Water & Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: "PCAP doesn't scale". In fact, with their solution, it scales! To investigate incidents, PCAPs are often the gold mine. They contain many IOCs, but they also introduce challenges: disk space, retention policy, growing network throughput. When vendors' solutions don't fit, it's time to build your own. Ok, only big organizations like Facebook have the resources to do this, but it's quite fun. The solution they developed can be seen as a service: "PCAP as a Service". They started by building the right hardware for sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on "internal network monitoring and anomaly detection through host clustering" by Thomas Atterna from TNO. The idea behind this talk was to explain how to monitor internal traffic as well. Indeed, in many cases organizations still focus on the perimeter, but internal traffic is also important: we can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that show the same behaviour, like mail servers or database servers. The next step is to determine "normal" behaviour per cluster and observe when individual hosts deviate. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, ...). The model is useful when your network is quite closed and stable, but much more difficult to implement in an "open" environment (like university networks).
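A toy sketch of that clustering idea (all host names and numbers below are invented, not from the talk): compute a feature vector per host, a centroid per behavioural cluster, and flag any host that sits far from its cluster's centroid.

```python
from math import dist
from statistics import mean

# Toy feature vectors per host (flows/hour, distinct peers), grouped into a
# behavioural cluster; real values would come from internal flow monitoring.
mail_cluster = {"mx1": (120, 40), "mx2": (130, 45), "mx3": (2000, 900)}

# Centroid = per-axis mean of all member vectors.
centroid = tuple(mean(axis) for axis in zip(*mail_cluster.values()))
distances = {h: dist(v, centroid) for h, v in mail_cluster.items()}
threshold = 1.2 * mean(distances.values())  # arbitrary toy threshold

deviating = [h for h, d in distances.items() if d > threshold]
print(deviating)  # mx3 behaves unlike the rest of the "mail server" cluster
```

Real implementations would use proper clustering (e.g. k-means over many features) and a statistically justified deviation threshold; the sketch only shows the shape of the computation.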
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, and how they deploy their attacking platform (RDP, RAT, VPN, etc.). The second part was a step-by-step explanation of how they abuse companies to steal money (sometimes a lot of it!). An interesting fact reported by Davide: the time required between the compromise of a new host (to drop a malicious payload) and the generation of new maldocs pointing to this host is only... 3 hours!
The next presentation was given by Gal Bitensky (Minerva): "Vaccination: An Anti-Honeypot Approach". Gal (re-)explained what the purpose of a honeypot is and how honeypots can be defeated. Then he presented a nice review of the ways used by attackers to detect sandboxes. Basically, when a malware sample detects something "suspicious" (read: something which makes it think it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which plants plenty of such artefacts on a regular Windows system to defeat malware. His tool has been released here.
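The "vaccination" trick can be illustrated with a small, hypothetical sketch that plants decoy artefacts for sandbox-aware malware to stumble over. The file names below are illustrative assumptions (typical sandbox/VM tell-tales), not Gal's actual artefact list, and a real tool would also plant registry keys, processes and the like:

```python
import tempfile
from pathlib import Path

# Decoy artefacts that sandbox-aware malware commonly looks for; finding
# them should convince it that it runs in an analysis environment.
DECOYS = [
    "vboxservice.exe.marker",  # VirtualBox guest service tell-tale
    "vmtoolsd.exe.marker",     # VMware Tools tell-tale
    "sample.pcap",             # capture file, typical of analysis hosts
    "cuckoo_agent.py",         # Cuckoo sandbox agent
]

# Use a scratch directory for the demo; a real "vaccine" would plant these
# in the locations malware actually probes.
target = Path(tempfile.mkdtemp(prefix="vaccine_"))
for name in DECOYS:
    (target / name).touch()

created = sorted(p.name for p in target.iterdir())
print(created)
```

The point of the approach is exactly this asymmetry: creating fake indicators is nearly free for defenders, while malware authors must choose between skipping vaccinated machines or losing their sandbox evasion.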
Paul Alderson (FireEye) presented "Injection without needles: A detailed look at the data being injected into our web browsers". Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service they now provide to their members. It is based on DNS RPZ. The idea is to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have a collateral issue like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially Bind 9.
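As an illustration of how such a service can be built, a minimal Bind 9 RPZ setup could look like the sketch below (the zone name, file path and landing page hostname are invented for the example):

options {
    // consult the policy zone before answering recursive queries
    response-policy { zone "rpz.example"; };

zone "rpz.example" {
    type master;
    file "/etc/bind/db.rpz.example";

; db.rpz.example (SOA and NS records omitted for brevity):
; rewrite a malicious domain (and its subdomains) to the landing page       IN CNAME walled-garden.example.
*     IN CNAME walled-garden.example.

Rewriting to a CNAME of “.” would return NXDOMAIN instead; pointing to a landing page is what makes the “awareness” feature possible.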

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform that they deployed to perform more efficient automatic malware analysis. The project is based on Cuckoo that was heavily modified to match their new requirements.

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.

The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. Netflow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks. Based on the definition of a “flow” (“A layer 3 IP communication between two endpoints during some time period”), we got a review of Netflow. Netflow is valuable to increase the visibility of what’s happening on your networks, but it also has some specific points that must be addressed before performing analysis, e.g. de-duplication of flows. There are many use cases where net flows are useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where net flow data was used to discover something juicy. The talk ended with a review of tools available to process net flow data: SiLK, nfdump, ntop, but log management solutions can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help in this huge task. It is based on the following circle: introduction, data collection, exploration and remediation (the hardest part!).
I liked the description of their “CERT dropbox” which can be deployed at any place on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISP don’t have only to protect their customers from the wild Internet but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained “How politically motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French TV channel TV5) then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was performed, as well as of the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent challenges”. Loud challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they are facing an issue with the indexing licence. To index all these data, they would need extra licenses (and pay a lot of money). They explained how to “pre-process” the data before sending them to Splunk to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases:
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data

Some useful tips they gave, which are valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]

          Why Gutenberg?   
At WordCamp Europe 2017, Matt Mullenweg, co-founder of the WordPress open source project, announced that Gutenberg was available as a plugin for testing. In the past few weeks, members of the community have published their experiences with the new editor. Some of the reviews I’ve read so far include: Random (more...)
          Automattic to Renew Efforts on Underscores, Retire Components Starter-Theme Generator   
For the past several months we have received inquiries about Automattic’s open source Underscores starter theme. After six months of no commits to the GitHub repository and pull requests left unanswered, users and contributors were beginning to wonder whether the project was abandoned. After contacting Automattic to get a status (more...)
          10 Most Important Open Source Networking Projects   

Networking vendors use open source projects as platforms for enterprise projects, or as the underlying technology for some of the world's largest networks.

          acs.R useRs: Share your success stories   
Have you been using the acs.R package to download and analyze Census data in your work? Do you have a story you’d be willing to share, to help us promote the package and show off all the cool ways people are using open source tools to make sense of data and help inform communities, policy-makers, […]
          Python & JavaScript Developer - Odoo - Walloon Brabant, Namur   
Join us and help disrupt the enterprise market! With a small team of smart people, we released the most disruptive enterprise management software in the world.Odoo is fully open source, super easy, full featured (3000+ apps) and its online offer is 3 times cheaper than traditional competitors like SAP and MS Dynamics. Join us, we offer you an extraordinary chance to learn, to develop and to be part of an exciting experience and team. Responsibilities Develop...
          Open Source Community, Simplified   
Growing and maintaining an open-source community depends essentially on three things: Getting people interested in contributing Removing the barriers to entering the project and contributing Retaining contributors so that they keep contributing If you can get people interested, then have them actually contribute, and then have them stick around, you have a community. Otherwise, you … Continue reading
          Top 10 Reasons To Work On Open Source (In a California Accent)   
So, as a little digression from our normal content, I felt like writing a list of the top 10 reasons to work on open-source software…but being a born Californian, I felt I had to pay a little respect to my roots. So here we have the top 10 reasons to work on open-source…as said by, … Continue reading
          Post #73   
QUOTE(aspire2oo6 @ Dec 28 2012, 08:12 AM)
they have no choice but to keep boosting the specification to attract customers. If you notice iphone been still dual core until today still selling like hot cakes.

Android competition
China brand, Samsung, HTC, Sony and many many more

Iphone competition
Furthermore iphone have controlled price thats why they retain their reseller value

Agreed with you. Apple has no competition, that's why they still build the same spec with old hardware devices and only a new name. Until now, Apple's market never dropped. No other brands use iOS, but many brands use Android because Android is an open source project.
          Episode 97: The novel strategy of making money, and investing to do so - Amazon + Whole Foods   
Looks like we’ll be getting cheaper organic food what with Amazon buying Whole Foods. What exactly is the strategy at play here, though? Other than the obvious thing of doing online groceries, how is Amazon advantaged here such that others (like Wal-mart), can’t simply do this themselves. We go over these questions and how they related to M&A in general. Plus recommendations and some podcast meta talk. Mid-roll This episode is sponsored by Casper, who’s looking for some good senior SREs ( If you’re into building out and managing infrastructure that keeps code running and makes sure you can sleep soundly at night, check out the job listing, apply (, and be sure to mention that you heard about it on Software Defined Talk. According to Glassdoor reviews (,17.htm), it’s a damn fine place to work. You can also just email and browse all their openings at ( LOOK, MA! I PUT IN DATES! DevOpsDays Minneapolis, July 25 to 26th: get 20% off registration with the code SDT ( (Thanks, Bridget!). SpringDays ( - Atlanta (July 18th to 19th) ( Matt will be at: DevSecOps at RSA Conf APJ ( Sydney Chef Meetup August 1st ( Auckland AWS User Community August 3rd ( Brisbane Azure User Group October 11 ( Podcast meta-talk to be able to track what you listen to ( Just paying for podcasts. $220m+ estimated TAM ( We have a Casper ad! Amazon Buys Whole Foods This was not covered in the Mary Meeker slide-fest. Coté’s notebook on the topic ( Stratechery on WF Acquisition ( Exponent Podcast ( What exactly are the barriers to entry here for other grocery stores. The business: online, and just the grocery store on it’s the 460+ physical stores for other goods? Barriers to entry, Amazon buyers (Whole Foods looks good now?), culture clash?, HEB love, private label BONUS LINKS! Not covered in episode. Gartner Magic Quadrant for IAAS is Here! Larry D. 
( Once again, what a change from way back when: CRN ( The Register ( Johnny Leadgen can get a copy ( On Oracle: “Gartner warns potential customers to be cautious of high-pressure sales tactics.” How Microsoft Is Shifting Focus to Open Source Link ( “Chef is used to manage thousands of nodes internally across Azure, Office 365 and Bing.” Amazon Eyeing Slack? Link ( “Buying Slack would help Seattle-based Amazon bolster its enterprise services as it seeks to compete with rivals like Microsoft Corp. and Alphabet Inc.’s Google.” Walmart Buys Bonobo I’ve got a Bonobo suit I really like ( They had ModCloth and some others. Their M&A strategy has really shifted of late. Walmart Sez Get Off the AWS Finally a reason for multi-cloud ( BigCo’s gonna bully that supply-chain. What’s Wrong with Jenkins? Jenkins is the Nagios of CI/CD ( “No toolchain is perfect, but you can achieve software delivery perfection (or something close to it, at least) when you implement the right culture.” Tools don’t substitute culture. Oracle’s Swinging For the Fences (and missing) Link ( “He was also unwilling for Specsavers to become a guinea pig for Oracle's cloud.” Ubuntu Mobile Post Mortem Not much strategy… ( Serverless and the Death of DevOps Link ( Spoiler: “DevOps is the ultimate reactive, or event-driven, tech use case. It’s not going anywhere” State of DevOps 2017 Report Johnny Leadgen to the rescue (! Commercial Open Source Software Companies Link ( A bit of sourcing on the numbers would be valuable Glad Chef’s not on the list, wouldn’t want to comment on the numbers Cloud Foundry Summit A whole mess of videos! 121 of them. ( Heptio Out of Stealth Mode with K8s Management Tool TheNewStack covere ( Official page ( File under “It didn’t already do that. I see.” Not sure this qualifies as “coming out of stealth”, everyone knows they work on open source K8s. I’m not seeing a monetization strategy yet beyond support & training. 
Not that there’s anything wrong with that, but they raised $8.5 for their A-round BMC Software Exploring Merging with CA STOP THE PRESSES! TERRIBLE MEETS TERRIBLE ( So far, no confirmation, but ( “While the two companies were once dominant in the systems management industry, the analyst notes that CA and BMC have 7.5% and 8% share respectively as of FY16 which combined would put them on a near even footing with IBM, the largest vendor, at 15%.” “There are also many other vendors in the market including MSFT (7%) and NOW (5%) so anti trust concerns should not be an issue.” High Level Kubernetes Overview Link ( “Basically Kubernetes is a distributed system that runs programs (well, containers) on computers. You tell it what to run, and it schedules it onto your machines.” More on Service Meshes From James Governor, RedMonk ( Recommendations Brandon: The Scholar and the Drop Out podcast (; Coté’s add-on: Karl Lagerfella’s day (, no exercise and long night-shirts. Matt: Commando: Johnny Ramone’s Autobiography ( Coté: Gulf Shores, Alabama; Hillbilly Elegy ( and “The Dead Pig Collector.” (
          Visualize your domino data using Open Source java    

Visualize your domino data using Open Source java

This code shows how easy it is to create a diagram/chart and to save it as a file.
The code asks explorer.exe to display the image. (windows only)
It's just an example; feel free to do whatever you want with it.
Great for web applications

Example image,


  1. Go to JFree.Org
  2. Download the latest version of JFreeChart
  3. Extract the files to a temp directory
  4. Create a new notes java agent.
  5. Set Runtime target to "none"
  6. Click Edit Project and add the jar files found in the temp directory where you extracted JFreeChart to your agent.
  7. Paste the code showed in Example 1
Example 1
import lotus.domino.*;
import org.jfree.chart.*;
import java.util.*;

//+46(0)706 - 33 23 68
public class JavaAgent extends AgentBase {

	public void NotesMain() {
		try {
			Session session = getSession();
			AgentContext agentContext = session.getAgentContext();
			HashMap map = new HashMap();
			map.put("Pierre", new Integer(178));
			map.put("Dick", new Integer(87));
			map.put("Ola", new Double(200));
			map.put("Random", new Double(Math.random() * 200));
			writeChartToDisk("Diagram", "c:\\test.jpg", map);
		} catch (Exception e) {

	private void writeChartToDisk(String title, String fileName, Map map) {
		// put the map values into a DefaultPieDataset
		Iterator iterator = map.keySet().iterator();
		DefaultPieDataset pieDataset = new DefaultPieDataset();
		while (iterator.hasNext()) {
			Object o =;
			Object o2 = map.get(o);
			pieDataset.setValue((String) o, (Number) o2);
		// create the actual chart
		JFreeChart chart = ChartFactory.createPieChart(title, pieDataset, true, true, true);
		// write the chart to disk as a JPG file and ask explorer.exe to show it.
		// You could instead write the file to the html directory of the Domino server
		// or attach it to a Notes document.
		try {
			FileOutputStream fos = new FileOutputStream(fileName);
			ChartUtilities.writeChartAsJPEG(fos, 1, chart, 750, 400);
			Runtime run = Runtime.getRuntime();
			run.exec("explorer.exe " + fileName);
		} catch (Exception e) {

          OAuth2, JWT, Open-ID Connect and other confusing things   


I feel I have to start this post with an important disclaimer: don’t trust too much what I’m about to say.
The reason why I say this is because we are discussing security. And when you talk about security, anything other than 100% correct statements risks exposing you to some risk of any sort.
So, please, read this article keeping in mind that your source of truth should be the official specifications, and that this is just an overview that I use to recap this topic in my own head and to introduce it to beginners.


I have decided to write this post because I have always found OAuth2 confusing. Even now that I know a little more about it, I find some of its parts puzzling.
Even if I was able to follow online tutorials from the likes of Google or Pinterest when I needed to fiddle with their APIs, it always felt like some sort of voodoo, with all those codes and Bearer tokens.
And each time they mentioned I could make my own decisions for specific steps, choosing among the standard OAuth2 approaches, my mind tended to go blank.

I hope I’ll be able to fix some idea, so that from now on, you will be able to follow OAuth2 tutorials with more confidence.

What is OAuth2?

Let’s start from the definition:

OAuth 2 is an authorisation framework that enables applications to obtain limited access to user accounts on an HTTP service.

The above sentence is reasonably understandable, but we can improve things if we pinpoint the chosen terms.

The Auth part of the name reveals itself to be Authorisation (it could have been Authentication; it’s not).
Framework can be easily overlooked since the term framework is often abused; but the idea to keep here is that it’s not necessarily a final product or something entirely defined. It’s a toolset. A collection of ideas, approaches, well defined interactions that you can use to build something on top of it!
It enables applications to obtain limited access. The key here is that it enables applications, not humans.
limited access to user accounts is probably the key part of the definition that can help you to remember and to explain what OAuth2 is:
the main aim is to allow a user to delegate access to a user owned resource. Delegating it to an application.

OAuth2 is about delegation.

It’s about a human, instructing a software to do something on her behalf.
The definition also mentions limited access, so you can imagine of being able to delegate just part of your capabilities.
And it concludes mentioning HTTP services. This authorisation-delegation, happens on an HTTP service.

Delegation before OAuth2

Now that the context should be clearer, we could ask ourselves: How were things done before OAuth2 and similar concepts came out?

Well, most of the time, it was as bad as you can guess: with a shared secret.

If I wanted a software A to be granted access to my stuff on server B, most of the time the approach was to give my user/pass to software A, so that it could use it on my behalf.
This is still a pattern you can see in many modern software, and I personally hope it’s something that makes you uncomfortable.
You know what they say: if you share a secret, it’s no longer a secret!

Now imagine if you could instead create a new username/password pair for each service you need to share something with. Let’s call them ad-hoc passwords.
They are something different from your main account for a specific service, but they still allow access to the same service as if they were you. You would be able, in this case, to delegate, but you would still be responsible for keeping track of all these new application-only accounts you need to create.

OAuth2 - Idea

Keeping in mind that the business problem that we are trying to solve is the “delegation” one, we want to extend the ad-hoc password idea to take away from the user the burden of managing these ad-hoc passwords.
OAuth2 calls these ad-hoc passwords tokens.
Tokens, are actually more than that, and I’ll try to illustrate it, but it might be useful to associate them to this simpler idea of an ad-hoc password to begin with.

OAuth2 - Core Business

Oauth 2 Core Business is about:

  • how to obtain tokens

OAuth2 - What’s a token?

Since everything seems to focus around tokens, what’s a token?
We have already used the analogy of the ad-hoc password, that served us well so far, but maybe we can do better.
What if we look for the answer inside OAuth2 specs?
Well, prepare to be disappointed. OAuth2 specs do not give you the details of how to define a token. Why is this even possible?
Remember when we said that OAuth2 was “just a framework”? Well, this is one of those situation where that definition matters!
Specs just tell you the logical definition of what a token is and describe some of the capabilities it needs to possess.
But at the end, what specs say is that a token is a string. A string containing credentials to access a resource.
It gives some more detail, but it can be said that most of the time it’s not really important what’s in a token, as long as the application is able to consume it.

A token is that thing, that allows an application to access the resource you are interested into.

To point out how you can avoid overthinking what a token is, the specs also explicitly say that it “is usually opaque to the client”!
They are practically telling you that you are not even required to understand them!
Less things to keep in mind, doesn’t sound bad!

But to avoid turning this into a pure philosophy lesson, let’s show what a token could be:

  "access_token": "363tghjkiu6trfghjuytkyen",
  "token_type": "Bearer"
A quick glimpse shows us that, yeah, it’s a string. JSON-like, but that’s probably just because JSON is popular these days, not necessarily a requirement.
We can spot a section with what looks like a random string, an id: 363tghjkiu6trfghjuytkyen. Programmers know that when you see something like this, at least when the string is not too long, it’s probably a sign that it’s just a key that you can correlate with more detailed information, stored somewhere else.
And that is true also in this case.
More specifically, the additional information will be the details about the specific authorisation that that code is representing.

But then another thing should capture your attention: "token_type": "Bearer".

Your reasonable questions should be: what are the characteristics of a Bearer token type? Are there other types? Which ones?

Luckily for our efforts to keep things simple, the answer is easy (some may say, so easy that it’s confusing…)

Specs only talk about Bearer token type!

Uh, so why did the person who designed tokens this way feel that he had to specify the only known value?
You might start seeing a pattern here: because OAuth2 is just a framework!
It suggests how to do things, and it does some of the heavy lifting for you by making some choices, but at the end you are responsible for using the framework to build what you want.
We are just saying that, although here we only talk about Bearer tokens, it doesn’t mean that you can’t define your custom type, with a meaning you are allowed to attribute to it.

Okay, just a single type. But that is a curious name. Does the name imply anything relevant?
Maybe this is a silly question, but for non-native English speakers like me, what Bearer means in this case could be slightly confusing.

Its meaning is quite simple actually:

A Bearer token works like this: if you have a valid token, we trust you. No questions asked.

So simple it’s confusing. You might be arguing: “well, all the token-like objects in the real world work that way: if I have valid money, you exchange it for the goods you sell”.

Correct. That’s a valid example of a Bearer Token.

But not every token is of the Bearer kind. A flight ticket, for example, is not a Bearer token.
It’s not enough to have a ticket to be allowed to board a plane. You also need to show a valid ID that your ticket can be matched with; and if your name matches the ticket, and your face matches the ID card, you are allowed to get on board.

To wrap this up, we are working with a kind of token where possessing one is enough to get access to a resource.

And to keep you thinking: we said that OAuth2 is about delegation. Tokens with this characteristic are clearly handy if you want to pass them to someone you delegate to.

A token analogy

Once again, this might be my non-native English speaker background that suggests I clarify it.
When I look up the first translation of token in Italian, my first language, I’m pointed to a physical object.
Something like this:


That, specifically, is an old token, used to make phone calls in public telephone booths.
Despite being a Bearer token, its analogy with OAuth2 tokens is quite poor.
A much better picture has been designed by Tim Bray, in this old post: An Hotel Key is an Access Token
I suggest you read the article directly, but the main idea is that, compared to the physical metal coin I linked first, your software token is something that can have a lifespan, can be disabled remotely and can carry information.

Actors involved

These are our actors:

  • Resource Owner
  • Client (aka Application)
  • Authorisation Server
  • Protected Resource

It should be relatively intuitive: an Application wants to access a Protected Resource owned by a Resource Owner. To do so, it requires a token. Tokens are emitted by an Authorisation Server, which is a third party entity that all the other actors trust.

Usually, when I read something new, I tend to quickly skip through the actors of a system. Probably I shouldn’t, but most of the time the paragraph that describes, for example, a “User” ends up using many words to just tell me that it’s just, well, a user… So I try to look for the terms that are less intuitive and check if some of them have characteristics that I should pay particular attention to.

In OAuth2 specific case, I feel that the actor with the most confusing name is Client.
Why do I say so? Because, in normal life (and in IT), it can mean many different things: a user, a specialised software, a very generic software…

I prefer to classify it in my mind as Application.

I want to stress that the Client is the Application we want to delegate our permissions to. So, if the Application is, for example, a server side web application we access via a browser, the Client is not the user or the browser itself: the Client is the web application running in its own environment.

I think this is very important. Client term is all over the place, so my suggestion is not to replace it entirely, but to force your brain to keep in mind the relationship Client = Application.

I also like to think that there is another not official Actor: the User-Agent.

I hope I won’t confuse people here, because this is entirely something that I use to build my mental map.
Despite not being defined in the specs, and also not being present in all the different flows, it can help to identify this fifth Actor in OAuth2 flows.
The User-Agent is most of the time impersonated by the Web Browser. Its responsibility is to enable an indirect propagation of information between 2 systems that are not talking directly to each other.
The idea is: A should talk to B, but it’s not allowed to do so. So A tells C (the User-Agent) to tell B something.

It might be still a little confusing at the moment, but I hope I’ll be able to clarify this later.

OAuth2 Core Business 2

OAuth2 is about how to obtain tokens.

Even if you are not an expert on OAuth2, as soon as someone mentions the topic, you might immediately think about those pages from Google or the other major service providers that pop up when you try to log in to a new service on which you don’t have an account yet, where you tell Google that, yeah, you trust that service and you want to delegate some of the permissions you have on Google to it.

This is correct, but this is just one of the multiple possible interactions that OAuth2 defines.

There are 4 main ones it’s important you know. And this might come as a surprise if it’s the first time you hear it:
not all of them will end up showing you the Google-like permissions screen!
That’s because you might want to leverage the OAuth2 approach even from a command line tool; maybe even without any UI at all capable of displaying an interactive web page to delegate permissions.

Remember once again: the main goal is to obtain tokens!

If you find a way to obtain one (the “how” part) and you are able to use it, you are done.

As we were saying, there are 4 ways defined by the OAuth2 framework. Sometimes they are called flows, sometimes they are called grants.
It doesn’t really matter how you call them. I personally use flow, since it helps me remember that they differ from one another in the interactions you have to perform with the different actors to obtain tokens.

They are:

  • Authorisation Code Flow
  • Implicit Grant Flow
  • Client Credential Grant Flow
  • Resource Owner Credentials Grant Flow (aka Password Flow)

Each one of them is the suggested flow for specific scenarios.
To give you an intuitive example, there are situations where your Client is able to keep a secret (a server side web application) and others where it technically can’t (a client side web application whose code you can entirely inspect with a browser).
Environmental constraints like the ones just described would make some of the steps defined in the full flow insecure (and useless). So, to keep things simpler, other flows have been defined that entirely skip the interactions which were impossible or were not adding any security related value.

OAuth2 Poster Boy: Authorisation Code Flow

We will start our discussion with Authorisation Code Flow for three reasons:

  • it’s the most famous flow, and the one that you might have already interacted with (it’s the Google-like delegation screen one)
  • it’s the most complex, articulated and inherently secure
  • the other flows are easier to reason about, when compared to this one

The Authorisation Code Flow, is the one you should use if your Client is trusted and is able to keep a secret. This means a server side web application.

How to get a token with Authorisation Code Flow

  1. All the involved Actors trust the Authorisation Server
  2. User(Resource Owner) tells a Client(Application) to do something on his behalf
  3. Client redirects the User to an Authorisation Server, adding some parameters: redirect_uri, response_type=code, scope, client_id
  4. Authorisation Server asks the User if he wishes to grant Client access some resource on his behalf(delegation) with specific permissions(scope).
  5. User accepts the delegation request, so the Auth Server sends now an instruction to the User-Agent(Browser), to redirect to the url of the Client. It also injects a code=xxxxx into this HTTP Redirect instruction.
  6. Client, that has been activated by the User-Agent thanks to the HTTP Redirect, now talks directly to the Authorisation Server (bypassing the User-Agent). client_id, client_secret and code(that it had been forwarded).
  7. Authorisation Server returns the Client (not the browser) a valid access_token and a refresh_token

This is so articulated that it’s also called the OAuth2 dance!

Let’s underline a couple of points:

  • At step 3, we specify, among the other params, a redirect_uri. This implements the indirect communication we anticipated when we introduced the User-Agent as one of the actors: it is the key piece of information that lets the Authorisation Server forward information to the Client without a direct network connection open between the two.
  • the scope mentioned at step 3 is the set of permissions the Client is asking for
  • Remember that this is the flow you use when the Client can be fully trusted with secrets. That matters at step 6, when the communication between the Client and the Authorisation Server avoids passing through the less secure User-Agent (which could sniff or tamper with it). This is also why it makes sense for the Client to add even more security by sending its client_secret, which is shared only between it and the Authorisation Server.
  • The refresh_token is used for subsequent automated calls the Client might need to make to the Authorisation Server: when the current access_token expires and a new one is needed, sending a valid refresh_token avoids asking the User again to confirm the delegation.
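To make the indirection concrete, here is a minimal sketch of the URL the Client builds for the initial redirect. The endpoint and client values below are hypothetical; a real Client would use whatever its Authorisation Server publishes:

```python
from urllib.parse import urlencode

# Hypothetical Authorisation Server endpoint and client registration values
AUTH_ENDPOINT = "https://auth.example.com/authorize"

params = {
    "response_type": "code",                       # ask for an authorisation code
    "client_id": "my-client-1",
    "redirect_uri": "https://app.example.com/cb",  # where the code will be sent back
    "scope": "scope1 scope2",                      # permissions being requested
}

# The Client answers the User's request with an HTTP redirect to this URL
authorization_url = AUTH_ENDPOINT + "?" + urlencode(params)
print(authorization_url)
```

When the User approves, the Authorisation Server redirects the browser back to redirect_uri with code=xxxxx appended, and the Client exchanges that code (plus its client_secret) for tokens over a direct back-channel connection.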

OAuth2 - Got a token, now what?

OAuth2 is a framework, remember. What does the framework tell us to do now?

Well, nothing. =P

It’s up to the Client developer.

She could (and often should):

  • check whether the token is still valid
  • look up detailed information about who authorised this token
  • look up the permissions associated with that token
  • any other operation that makes sense before finally giving access to a resource

They are all valid, and pretty obvious, points, right?
Does the developer have to figure out the best set of operations to perform next on her own?
She definitely can. Otherwise she can leverage another specification: OpenID Connect (OIDC). More on this later.
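One standardised way to answer those questions is OAuth2 Token Introspection (RFC 7662): the Client POSTs the token to the Authorisation Server and reads back its status and metadata. A small sketch, with a hypothetical endpoint and a canned response standing in for the server's answer:

```python
from urllib.parse import urlencode

# Hypothetical introspection endpoint (OAuth2 Token Introspection, RFC 7662)
INTROSPECT_ENDPOINT = "https://auth.example.com/introspect"

# Form body the Client would POST (it authenticates with its own credentials)
request_body = urlencode({"token": "363tghjkiu6trfghjuytkyen"})

# Canned response: 'active' tells us whether the token is still valid,
# 'username' who authorised it, 'scope' which permissions it carries
introspection = {"active": True, "username": "paolo", "scope": "scope1 scope2"}

token_usable = introspection["active"] and "scope1" in introspection["scope"].split()
```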

OAuth2 - Implicit Grant Flow

It’s the flow designed for Client applications that can’t keep a secret. An obvious example is a client-side HTML application. But even a binary application whose code is exposed to the public can be manipulated to extract its secrets.
Couldn’t we have re-used the Authorisation Code Flow?
Yes, but… what’s the point of step 6 if the secret is not a secure secret anymore? We get no protection from that additional step!
So the Implicit Grant Flow is similar to the Authorisation Code Flow, but it skips that now-useless exchange.
It aims to obtain an access_token directly, without the intermediate step of obtaining a code first to be exchanged, together with a secret, for an access_token.

It uses response_type=token to specify which flow to use while contacting the Authorisation Server.
Note also that there is no refresh_token. This is because user sessions are assumed to be short (due to the less secure environment) and, in any case, the user will still be around to re-confirm his will to delegate (that was the main use case that led to the definition of refresh_tokens).
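In this flow the access_token comes back directly in the URL fragment of the redirect, where only client-side code can read it. A sketch of what the Client extracts (the callback URL is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical redirect the browser receives after the User approves;
# note the '#' fragment, which browsers never send to any server
callback = ("https://app.example.com/cb"
            "#access_token=363tghjkiu6trfghjuytkyen&token_type=Bearer")

fields = parse_qs(urlparse(callback).fragment)
access_token = fields["access_token"][0]

# As discussed above: no intermediate code, and no refresh_token
assert "code" not in fields and "refresh_token" not in fields
```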

OAuth2 - Client Credential Grant Flow

What if we don’t have a Resource Owner, or he’s indistinct from the Client software itself (a 1:1 relationship)?
Imagine a backend system that just wants to talk to another backend system. No Users involved.
The main characteristic of such an interaction is that it’s no longer interactive, since there is no user being asked to confirm his will to delegate something.
It also implies a more secure environment, where you don’t have to worry about active users being able to read secrets.

Its token request uses grant_type=client_credentials.

We are not detailing it here; just be aware that it exists and that, just like the previous flow, it’s a variation (a simplification, actually) of the full OAuth dance, which you are encouraged to use if your scenario allows it.
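For reference, the token request in this flow is a single POST in which the Client authenticates as itself; per RFC 6749 the body carries grant_type=client_credentials. A sketch with hypothetical credentials:

```python
from urllib.parse import urlencode

# Backend-to-backend token request: no User, no browser, no delegation screen
token_request_body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "backend-service-a",   # hypothetical registered client
    "client_secret": "s3cr3t",          # safe here: no User around to read it
    "scope": "scope1",
})
```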

OAuth2 - Resource Owner Credentials Grant Flow (aka Password Flow)

Please pay close attention here, because you are about to be confused.

This is the scenario:
The Resource Owner has an account on the Authorisation Server. The Resource Owner gives his account details to the Client. The Client uses these details to authenticate to the Authorisation Server…


If you have followed the discussion so far, you might be asking if I’m kidding you.
This is exactly the anti-pattern we tried to move away from at the beginning of our OAuth2 exploration!

How is it possible to find it listed here as a suggested flow?

The answer is quite reasonable, actually: it’s a possible first stop when migrating from a legacy system.
And it’s actually a little better than the shared-password antipattern:
the password is shared, but only as a means to start the OAuth dance used to obtain tokens.

This allows OAuth2 to get its foot in the door when we have no better alternative.
It introduces the concept of access_tokens, and it can be used until the architecture is mature enough (or the environment changes) to allow a better, more secure flow for obtaining tokens.
Also, notice that now tokens are the ad-hoc credential that reaches the Protected Resource system, while in the fully shared-password antipattern it was our actual password that had to be forwarded.
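The resulting token request looks like the sketch below (RFC 6749 calls this the resource owner password credentials grant; all values are hypothetical). The User's password appears exactly once, to bootstrap the token exchange; after that, only tokens travel onwards:

```python
from urllib.parse import urlencode

# The one place the User's password is (regrettably) shared with the Client
token_request_body = urlencode({
    "grant_type": "password",
    "username": "paolo",
    "password": "paolo-password",   # the antipattern part, used once
    "client_id": "legacy-client",   # hypothetical registered client
})
```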

So, far from ideal, but at least it is justified by some criteria.

How to choose the best flow?

There are many decision flow diagrams on the internet. One of those that I like the most is this one:

OAuth2 Flows from

It should help you remember the brief description I have given you here and choose the easiest flow for your environment.

OAuth2 - Back to tokens: JWT

So, we are able to get tokens now. We have multiple ways to get them. We have not been told explicitly what to do with them, but with some extra effort and a bunch of additional calls to the Authorisation Server we can arrange something and obtain useful information.

Could things be better?

For example, we have assumed so far that our tokens might look like this:

   {
     "access_token": "363tghjkiu6trfghjuytkyen",
     "token_type": "Bearer"
   }

Could we have more information in it, so as to save us some round-trips to the Authorisation Server?

Something like the following would be better:

  {
    "active": true,
    "scope": "scope1 scope2 scope3",
    "client_id": "my-client-1",
    "username": "paolo",
    "iss": "http://keycloak:8080/",
    "exp": 1440538996,
    "roles": ["admin", "people_manager"],
    "favourite_color": "maroon",
    ...
  }

We’d be able to directly access information tied to the Resource Owner’s delegation.

Luckily someone else had the same idea, and they came up with JWT: JSON Web Tokens.
JWT is a standard defining the structure of JSON-based tokens representing a set of claims. Exactly what we were looking for!

Actually, the most important thing the JWT spec gives us is not the payload we exemplified above, but the capability to trust the whole token without involving an Authorisation Server!

How is that even possible? The idea is not a new one: asymmetric signing (public-key cryptography), defined, in the context of JWT, by the JOSE specs.

Let me refresh this for you:

In asymmetric signing, two keys are used to verify the validity of information.
These two keys are coupled, but one is secret, known only to the document creator, while the other is public.
The secret one is used to calculate a fingerprint of the document: a signed hash.
When the document is sent to its destination, the reader uses the public key, associated with the secret one, to verify whether the document and the fingerprint he has received are valid.
Digital signing algorithms tell us that the document is valid, according to the public key, only if it was signed by the corresponding secret key.

The overall idea is: if our local verification passes, we can be sure that the message has been published by the owner of the secret key, so it’s implicitly trusted.

And back to our tokens use case:

We receive a token. Can we trust this token? We verify the token locally, without the need to contact the issuer. If, and only if, the verification based on the trusted public key passes, we consider the token valid. No questions asked. If the token is valid according to its digital signature AND alive according to its declared lifespan, we can take its claims as true, and we don’t need to ask the Authorisation Server for confirmation!

As you can imagine, since we put all this trust in the token, it might be wise not to emit tokens with an excessively long lifespan:
someone might have changed his delegation preferences on the Authorisation Server, and that information might not have reached the Client, which still holds a valid, signed token it can base its decisions on.
Better to keep things a little more in sync by emitting tokens with a shorter lifespan, so outdated preferences don’t risk being trusted for long periods.
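The local validation described above can be sketched without any network call. The snippet below only decodes the token and checks its declared lifespan; the signature check is left as a comment, because a real implementation must verify it with a JOSE library against the issuer's public key before trusting anything:

```python
import base64
import json
import time

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_jwt(token: str) -> dict:
    header_b64, payload_b64, signature_b64 = token.split(".")
    # Real code: verify signature_b64 with the issuer's public key (JOSE/JWS)
    payload = json.loads(b64url_decode(payload_b64))
    if payload.get("exp", 0) < time.time():   # declared lifespan check
        raise ValueError("token expired")
    return payload

# Build a throwaway token just to exercise the decoding path
claims = {"username": "paolo", "scope": "scope1", "exp": int(time.time()) + 60}
parts = [{"alg": "RS256", "typ": "JWT"}, claims]
fake_token = ".".join(
    base64.urlsafe_b64encode(json.dumps(p).encode()).rstrip(b"=").decode()
    for p in parts
) + ".fake-signature"

print(inspect_jwt(fake_token)["username"])
```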

OpenID Connect

I hope this section won’t disappoint you, but the article was already long and dense with information, so I’ll keep it short on purpose.

OAuth2 + JWT + JOSE ~= OpenID Connect

Once again: OAuth2 is a framework.
The OAuth2 framework is used in conjunction with the JWT specs, JOSE and other ideas we are not going to detail here, to create the OpenID Connect specification.

The idea you should take away is that, more often than not, you are probably interested in using and leveraging OpenID Connect, since it puts together the best of the approaches and ideas described here.
You are, yes, leveraging OAuth2, but now within the much more defined bounds of OpenID Connect, which gives you richer tokens and support for Authentication, something plain OAuth2 never covered.

Some online services let you choose between OAuth2 and OpenID Connect. Why is that?
Well, when they mention OpenID Connect, you know you are using a standard: something that will behave the same way even if you switch implementations.
The OAuth2 option you are given is probably something very similar, potentially with some killer feature you might be interested in, but custom built on top of the more generic OAuth2 framework.
So be cautious with your choice.


If you are interested in this topic, or if this article has only confused you further, I suggest you check out OAuth 2 in Action by Justin Richer and Antonio Sanso.
On the other hand, if you want to test your fresh knowledge against an open source Authorisation Server, I definitely recommend playing with Keycloak, which is capable of everything we have described here and much more!


After running a post looking for contract and permanent developers in the UK, Damian blogged about his experiences with recruitment consultants and I’ve become so incensed that I feel the need to rant, hence this post.


I know that when I feel a very strong reaction to a particular topic that I have some emotional baggage associated with it, and as such I have a lot of unresolved anger towards recruitment consultants.  I’ll try and be balanced, and brief, and share some of my thoughts.


I would almost never recommend to any of my clients that they use recruitment consultants.  I would in fact only ever recommend it if they were absolutely desperate.  The reason for this is threefold:


First of all, managers of developer teams need to be very hands on with the recruitment process.  Outsourcing this work, because that’s what you’re doing with recruitment consultants, is a bad idea.  Yes, you may get a thousand really, really poor CVs through the door of which only one is gilt-edged, but personally I would rather go through them by hand than rely on an untrained professional to do this.  (Remember, a development manager probably knows about a thousand times more about software development than a salesperson who happens to be working in the software development field.)


Why?  Because I know what I’m looking for given the big picture.  If I’m advertising for a senior C# developer and a CV comes through from someone looking for a junior testing position, a recruitment consultant would instantly dismiss the junior candidate.  I may not want to employ him for the senior role, but if his CV stands out for whatever reason (i.e. my intuition is telling me there’s something interesting here), I might want to meet with him, see what he’s about and bring him into the organization in some other fashion. 


Recruitment consultants are only any good at matching keywords.  “Here’s a developer with six years Smalltalk experience, and he’s built some open source VB .NET projects in his spare time.  Uh, well my client is looking for C#, so let’s throw him away.”  Riiiiight.  So here’s a developer with shed loads of OO development experience, who happens to do .NET projects in his spare time.  Sounds like someone I’d like to meet.  However, a recruitment consultant would never let me have that chance.


Secondly, recruitment consultants only find people who have degrees.  Why?  Well, because if they say, “must have a 2:1” they can instantly cut down on 50% of the CVs they might otherwise get.  That saves the poor little lambs work! 


The best developers I’ve ever met have not had degrees.  I’m happy to admit that might be a weird anomaly, but I don’t want someone really good getting filtered out just to reduce the operating costs of a recruitment consultancy.


Thirdly, recruitment consultants cannot find really good people.  For me, a really good developer is the kind of developer who has a passionate interest in community.  They can either contribute to the community, or they are just part of that community. 


A recruitment consultant can’t find people in a community.  If I’m interviewing someone and she reels off a list of ten people whose blogs she reads every week and I’ve never heard of them, or she tells me about some great stuff she read in Chris Sells’ Windows book, I am about ten times more likely to hire her than someone who doesn’t give a damn about the community.  I mean, I guess everyone reading this knows how important I think community is (I’m an MVP after all), but it’s absolutely essential that any developer in my employ has an attitude of learning.  A strong interest in community is great evidence of this.  However, you can’t put “I read blogs” or “the top 10 books on XP I read over the past year” on a CV, therefore the recruitment consultant chucks this potential hire on the “reject” pile, even though she’s probably exactly the sort of person that I’d want to hire.


Worse than this is that a recruitment consultant can’t find people who contribute to the community.  Imagine how I’d feel interviewing someone who gives talks and writes books and has a blog and runs a Web site?  I’d probably hire someone like that in a shot.  However, a recruitment consultant only gets paid if I hire through him.  If I can find the person directly and hire them directly, I can save myself several thousand pounds.


This used to happen to me all the time until I worked out what was going on.  If you submit your CV to a recruitment bastard (I told you I was angry!), they strip out everything that could help their client find you directly.  Written a book?  Well, that gets deleted, because you could go find the author and, from there, invite him into the interview directly, saving yourself 40%.  Written a magazine article?  Gone.  Gave a keynote at a developer conference?  Gone!  If you take all that stuff out of my CV, it looks pretty anemic.  I’d only hire me if I could see the big picture.  And I, if I do say so myself, am a pretty good developer.


As a final point, recruitment is a continual process.  If I meet someone in the course of business, I keep their contact details safe just in case I need them for some reason.  The same should be true of potential employees.  Recruitment consultants don’t work on this principle – they are reactive to a manager’s immediate need for developers.  If the right way to do this is to always be looking for potential hires then, ipso facto, recruitment consultants are of little use in this way too.


I feel a bit like I’m the angryCoder writing this, but well, I am angry!  I’m fed up with good people not being able to find good work because for some reason we’ve managed to construct our world such that untrained salespeople are blocking our access to the most vital resource we need… developers, developers, developers! 

          Cloud hosted CI for .NET projects   

Originally posted on:

Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.” What’s different about AppVeyor is that it’s a hosted solution.

Why is that important? By being a hosted solution, it means that I don’t have to maintain my own infrastructure for a build server.

How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan. For an unlimited number of public repositories, it’s free. That gives you a cloud hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all.

I have several open source projects (hosted at, so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my GitHub password into AppVeyor. Once that was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build.

Out of the box, you can simply select a repository, add the build project, click New Build and wait for the build to complete. You now have a complete CI server running for your project. The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it through a web-based interface, which is very streamlined, clean and easy to use, or through an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (YAML) and store it in your repository. The benefits of that are huge. The file becomes a versioned artifact in your source control system, so it can be branched and merged, and it is completely transparent to anyone working on the project.
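As an illustration, a minimal appveyor.yml for a .NET project might look like the sketch below. The solution name and test assembly pattern are hypothetical; AppVeyor's documentation covers the full schema:

```yaml
version: 1.0.{build}        # build number pattern
configuration: Release
before_build:
  - nuget restore           # restore NuGet packages before compiling
build:
  project: MyProject.sln    # hypothetical solution file
test:
  assemblies:
    - '**\*.Tests.dll'      # hypothetical test assembly pattern
```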

By the way, AppVeyor isn’t limited to just GitHub. It currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve provided some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time.

AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.

          Senior Support Engineer - GridGain - Time, IL   
Technical support for GridGain and Apache Ignite products. Close interaction with GridGain development team and Apache Ignite open source community....
From GridGain - Wed, 10 May 2017 19:19:23 GMT - View all Time, IL jobs
          Learning Python (100% discount)   
Learn to code like a professional with Python – an open source, versatile, and powerful programming language. Learning Python has a dynamic and varied nature: it reads easily and lays a good foundation for those who are interested in digging deeper. What You Will Learn: Get Python up and running on Windows, Mac, and Linux…
          PostgreSQL 9.3.1   
PostgreSQL is an open source object-relational database system.
          Free SCADA Software Download   

The following list presents free, open source SCADA software. 1. OpenPLC Project Supported OS: Windows License: Creative Commons Attribution-ShareAlike 4.0 International License. The OpenPLC Project tries to be exactly what its name states. It is a standard industrial controller, with sturdy hardware and real-time response. It can be programmed with all the five ...

The post Free SCADA Software Download appeared first on Instrumentation Tools.

          How will IoT change your life?   

An IoT (Internet of Things) setup has devices connected to the internet catering to multiple use case scenarios: monitoring assets, executing tasks and services that support day-to-day human requirements, ensuring life and safety through alerts and responses, managing city infrastructure through command-and-control centers for emergency response, enabling efficient governance through process automation, provisioning healthcare, and enabling sustainable energy management, thereby addressing environmental conservation concerns.

A platform which caters to all above use cases from devices and sensors to management functionalities qualifies to be a Smart city platform.

Cloud computing is a popular service that comes with many characteristics and advantages. Basically, cloud computing is like a DIY (Do It Yourself) service wherein a user/consumer can subscribe to computing resources on demand, whilst the services are delivered entirely over the Internet.

IoT and cloud computing go hand in hand, though they are two different technologies that are already part of our lives. Both being pervasive qualifies them as the internet of the future. Cloud merged with IoT is foreseen as a new and disruptive paradigm.

Just as the cloud is available as a service, whether infrastructure, platform or software, IoT is seen as every(thing) as a service for the future, since it also fulfills the smart city initiative. The first and foremost requirement of any IoT platform for a smart city is on-demand self-service, which enables usage-based subscription to the computing resources (hardware) that manage and run the automation, platform functions, software features and algorithms that form part of the city management infrastructure.

Characteristics of such an IoT on Cloud scenario are …

  • Broad network access - to enable connectivity for any device, whether laptop, tablet, nano/micro/pico gateway, actuator or sensor.
  • Resource pooling - for on-demand access to compute resources, e.g. assigning an identity to a device in the pool.
  • Rapid elasticity - to quickly adapt software features, providing elastic computing, storage and networking on demand.
  • Measured service - pay only for the resources and/or services used, based on duration/volume/quantity of usage.

The advantage of any IoT cloud setup is that it doesn’t involve upfront CAPEX from a service consumer’s point of view: there is no need to build the entire infrastructure from ground zero. Rather, it is based on a subscribe-operate-scale-pay model. This gives stakeholders and decision makers instant access to the actual environment, which helps them gauge prospective investment and expenditure, while technology teams are geared up to anticipate which component of the IoT setup needs to be scaled, rather than replicating the entire setup to fulfill growing demands.

Dockers (basically containers bundling compute, storage and a software module with the runtime environment required to run that module of the overall software) and microservices (independent services with their own data persistence and dependency software elements, which can run independently or provide services to monolithic systems) are some of the features that help manage the scalability of an IoT platform on the cloud for smart city use cases. Individual modules and components within the IoT platform can be preconfigured as Dockers and microservices. Once there is traction on the platform, the respective Docker or microservice gets provisioned to handle the surge in data traffic; thus each individual functionality of the platform becomes horizontally scalable. Hence, to address such ad-hoc scalability requirements, only the individual module of the platform needs to be scaled, unlike monolithic systems where the entire platform needs replication, which saves substantial resources and OPEX for stakeholders.

This platform architecture can be implemented on a cloud infrastructure reusing legacy hardware or over commodity computing infrastructure.

Any smart city deployment of an IoT platform demands a fail-safe, high-availability setup. As a result, the computing infrastructure has to be clustered (a grouping of similar functional units/modules within the software system). With the surge in the number of clusters and Dockers for each functional module, managing such disparate clustered environments becomes a challenge. Technologies such as Kubernetes and Mesos address these challenges.

Mesos and Kubernetes enable efficient management, scalability and optimization of Docker microservices and APIs that are exposed as PaaS or SaaS over cloud infrastructure, thereby fulfilling on-the-fly auto-scaling demands from service consumers.

Pacific Controls’ Galaxy2021 platform, built using open source technologies, has adopted most of the above-mentioned technologies and best practices. This forms a unique value proposition, enabling early adoption of the latest technology innovations in the open source world related to either IoT or cloud computing. The Galaxy2021 platform is horizontally scalable and capable of managing disparate IoT applications from various stakeholders. It can handle high-volume data communication originating at high frequency from the various sensors, devices and gateways installed across smart city assets.

The Galaxy2021 Platform has been deployed and is available on different cloud infrastructures in public, private and hybrid models, catering to customers ranging from governments and utility companies to OEMs across the Middle East, the US and Oceania.

          VPS for open source project   

I'm developing a free open source project that's nearing completion, and we're trying to find a home to host it. Any recommendations?

It uses Laravel, NodeJS with

          Resell hosted email w/ WHMCS plugin | Summer 2017 offers 60% off   

Mailcheap (Cyberlabs s.r.o. CZ & Cyberlabs Inc. USA) is an enterprise email solutions provider. Summer offers below starting from $10/yr for 20 GB storage.

What's new?

  • Resell hosted email with WHMCS plugin. Custom branding available for $39.99/yr (no setup fee).
  • New servers, improved performance and choice of two webmails.



  • Unlimited Users
  • Unlimited Domains
  • 10 GB to 50 TB Storage
  • ~100% deliverability: Carrier grade outbound filtering & Premium SMTP relays
  • Choice of webmails: SOGo, Afterlogic
  • WebMail, Contacts (CardDAV) & Calendar (CalDAV) w/ Exchange ActiveSync as standard [SOGo]: 50 MB att.
  • Sieve filtering, vacation auto responder, graphical stats
  • TLS for secure access & transmission
  • SPF, DKIM, DMARC support
  • Free setup & migration (1 file)
  • IMAP, SMTP, POP3, EAS support
  • Web based admin panel
  • Anti-spam & anti-virus protection (4-step filtration) w/ quarantine management
  • DDoS protection
  • RAID protection & offsite data replication
  • Dual backup MX as standard
  • Fully Open source
  • Comprehensive documentation
  • Direct-Admin™ support (L3 only)

Summer 2017 offers

Basic 20G

Unlimited Domains | Unlimited Users | 20 GB Storage | Promo code: P1PCYS8D0T | Order

Annual price: $10 (recurring)

Basic 50G

Unlimited Domains | Unlimited Users | 50 GB Storage | Promo code: 3V0ZHRP2YZ | Order

Annual price: $25 (recurring)

Basic 100G

Unlimited Domains | Unlimited Users | 100 GB Storage | Promo code: AE6B4NK84B | Order

Annual price: $48 (recurring)

P10 (HA)

Unlimited Domains | Unlimited Users | 10 GB Storage | Promo code: 44X9WEQSR9 | Order

Annual price: $48 (recurring)

Please check the product page, FAQ & demo for more information.


Website | Enterprise Email | Demo | Features | Infrastructure | Comparison | Help/FAQ | Sign Up

          How to Monitor your CentOS 7 Server using Cacti   

HowToForge: Cacti is a free and open source network graphing solution.

          How to Install Joomla with Apache on Debian 9 (Stretch)   

HowToForge: Joomla is one of the most popular and widely supported open source content management system (CMS) platforms in the world.

          Comment on Links 6/30/17 by Blennylips   
<blockquote>I’ve never met an MIT professor who wasn’t part PT Barnum</blockquote> I met and was hired by one. He had that streak, true, but only a wee bit. In the late 90s, as part of a Woods Hole SBIR (Small Business Innovation Research) grant, he needed a Windows user app to computerize the WH gizmo. I used 20% of my time working on Guido's open source baby Python (<a href="" rel="nofollow">python 1.5.2 win32 burnham</a>) and could then crank out any old app you needed tout de suite, eventually taking me to Harrodsburg Kentucky, Gumi Korea, Shizuoka Japan, not to mention Chepstow Wales.
          Open Source Friday @github #OpenSourceFriday   
Open Source Friday … invest a few hours contributing to the software you use and love. Open source is made by people just like you. This Friday, invest a few hours contributing to the software you use and love. Contributing to open source happens at all levels, across projects and design, documentation, operations and code. You […]
          Raspberry Pi as an E-Book Server @Raspberry_Pi #PiDay #RaspberryPi   
via eBooks are a great way for teachers, librarians, and others to share books, classroom materials, or other documents with students—provided you have ready and reliable access to broadband. But even if you have low or no connectivity, there’s an easy solution: Create an eBook server with the open source Calibre eBook management software […]
          The @Adafruit Metro m0 express is open-source hardware certified @oshwassociation   
Adafruit-Metro-M0-Express has now officially been certified as open source hardware by the Open Source Hardware Association. Congratulations! Your UID is: US000067 Your hardware is now listed in the certification directory located here: Please take a moment to verify that the information is correct. Going forward, you can use your UID in conjunction with the certification […]
           CHOReVOLUTION Project to Facilitate Cross-Organization Service Integration, Cédric Thomas, OW2 CEO at Cloud Expo Europe 2017   

From Code to Product, the CHOReVOLUTION Studio is addressing scalable IoT/IoS applications based on choreography modelling, synthesis, adaptation, service bus, security, and cloud. It aims at integrating as much as possible existing web services to create new innovative solutions. The first applications being developed are in the Intelligent Transportation Systems and Smart Tourism domains, with benefits in terms of time-to-market, agility, dynamism and cross-organization coordination. The CHOReVOLUTION software is published under an open source licence and made publicly available through the OW2 community.
          CHOReVOLUTION Enactment Engine Demo at POSS 2016   

This demonstration was showcased at Paris Open Source Summit 2016 by Gianluca Ripa, Senior Account Manager at Cefriel.
          CHOReVOLUTION Studio POSS Demo   

This presentation was provided at POSS 2016, Paris Open Source Summit, by Amleto Di Salle, Researcher from L’Aquila University.
          Tracking the DNA of Your Android Code   

In my previous article, we explored a process that developer teams can follow to manage the source code for software-related product development, using the ...

The post Tracking the DNA of Your Android Code appeared first on Open Source For You.

          Cut and Play with Pitivi Video Editor   

In this article, I talk about editing videos (one of my hobbies) using one of the many open source video editors available — Pitivi. ...

The post Cut and Play with Pitivi Video Editor appeared first on Open Source For You.

          Sementi e Diritti   

Every language has its own grammar, and the book treats industrial agriculture and traditional agriculture as two different ways of speaking about, and engaging with, production, nature, the market and research. Industrial agriculture speaks mostly in the singular, the result of a reductionist mindset that tries to force the world into a few rules and a few standards.

Traditional agriculture, by contrast, prefers the plural: multifunctionality and diversification.

The main dividing line between the two languages is how they relate to seeds, an element indispensable to both, but today regulated according to the industrial vision, which fits the needs of traditional agriculture poorly. New legal solutions are needed, ones that above all call into question patents on living organisms and intellectual property over discoveries and inventions.

Taking as a model the revolution that open source software brought about in computing, and that wiki-style sharing technologies brought about in creative work, the authors suggest rethinking the legal framework around seeds on the basis of these cultural achievements.

          Mozilla Firefox, Portable Edition 54.0.1 (web browser) Released   

 is proud to announce the release of Mozilla Firefox®, Portable Edition 54.0.1. It's the Mozilla Firefox browser bundled with a launcher as a portable app, so you can take your browser, bookmarks, settings and extensions on the go. And it's open source and completely free. Firefox Portable is a dual-mode 32-bit and 64-bit app, ensuring Firefox runs as fast as possible on every PC. It's available for immediate download and bundled on the World's Best Flash Drive: The Carbide as well as the fast and affordable Companion.

Mozilla®, Firefox® and the Firefox logo are registered trademarks of the Mozilla Foundation and used under license.

Update automatically or install from the portable app store in the Platform.

          KeePassXC Portable 2.2.0 Rev 2 (secure password manager) Released   

KeePassXC Portable 2.2.0 Rev 2 has been released. KeePassXC is a full-featured password manager packaged as a portable app, so you can take your email, internet, banking and other passwords with you. This release improves path portablization. It's packaged in Format for easy use from any portable device and integration with the Platform. And it's open source and completely free.

Update automatically or install from the portable app store in the Platform.

          E-voting: should we use it?   
There are multiple methods of voting and, for many years, paper-based methods have been preferred (in the UK, the United States and many other countries). However, in recent years there have been movements towards electronic methods. Several people have pointed out flaws, but there are also some great benefits. So, should we change to e-voting, and if so, what version of it should we use?

Governmental stances
The Institute for Public Policy Research published a background paper called E-voting: Policy and Practice, which revealed that the UK government plans to implement an e-voting system as a way of increasing voter turnout. The government paper In the Service of Democracy listed four things that could help achieve that goal:
  • Online electoral register
  • Online registration and online applications for postal votes
  • Online and text voting
  • Electronic counting and collating of election results
The United States has had e-voting systems for a number of years. In March 2002, California approved the Voting Modernization Bond Act, which allowed the purchase of modern electronic voting systems to replace the existing punch-card method.

The following shows the state's commitment to this form of voting:
"In December 2003 California Secretary of State Kevin Shelley released My Vote Counts: California's Plan for Voting in the 21st Century, which outlines California's plan for complying with the Help America Vote Act (HAVA). The state expects to receive over $100 million in HAVA funds. In November 2003 the Secretary of State issued a position paper on the deployment of touch-screen voting systems in California."
The Electoral Reform Society disapprove of most of the current state of e-voting in their policy document, which can be found here. However, one thing they do approve of is electronic counting of paper ballots. They feel it speeds up the whole process, and if it failed you could always do a manual count, as there is a paper-based element to it. The IPPR document mentioned earlier also details the benefits of e-counting and goes on to say that "In India the electronic system allowed the results to be announced a matter of hours after the polls closed".

I'm glad that there is approval for electronic counting, and I can understand why some people would want a paper backup. However, there really is no need for paper, providing the technology is implemented properly. For example, you could have a voting machine using RAID 1, which means that if the primary disk fails you still have the information on the second disk, and you could even remove it and do the counting on another system. If you have to use paper ballots, you could always do multiple electronic counts (possibly on more than one machine) to ensure accuracy. That would reduce the number of staff and volunteers required and therefore reduce costs.
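The idea of cross-checking a result with multiple independent counts can be sketched in a few lines of Python. This is purely illustrative: the ballot format and function names are my own invention, not part of any real voting system.

```python
from collections import Counter

def count_with_counter(ballots):
    # First tallying implementation, built on the standard library.
    return dict(Counter(ballots))

def count_with_loop(ballots):
    # Second implementation, written independently so that a bug in
    # one is unlikely to be reproduced in the other.
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

def cross_checked_count(ballots):
    a, b = count_with_counter(ballots), count_with_loop(ballots)
    if a != b:
        raise RuntimeError("counts disagree - manual investigation needed")
    return a

ballots = ["Smith", "Jones", "Smith", "Smith", "Jones"]
print(cross_checked_count(ballots))  # {'Smith': 3, 'Jones': 2}
```

Running the two tallies on separate machines, as suggested above, would additionally guard against a fault in any single machine.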

Machine voting
The following is from the Electoral Reform Society's policy:
"To minimise the risk of fraud, voting machines should produce voter verifiable audit trails. Rather than the voter completing a ballot paper, the machine should produce a ballot paper which the voter verifies and then puts in a ballot box. Should there be a dispute over the result, the paper ballots should be regarded as the definitive votes rather than those recorded on the machines.

Additionally, there should be safeguards equivalent to those described for e-counting."
I get the impression that they would be happier if machines weren't used, as their suggestion still goes through the same amount of paper as a non-electronic system, reducing the machine to 'an extra hurdle' that could potentially slow things down.
"Following the March 2004 primary election, the performance of Diebold touch-screen systems used in some California counties came under increased scrutiny. In public hearings conducted by the Secretary of State's Voting Systems and Procedures Panel, it was confirmed not only that uncertified versions of Diebold software had been used in some counties, but that some of the software had been inadequately tested and had performed poorly, resulting in lost and miscast votes"
If you read the quote above, you can see why some people would stop trusting machine voting. However, that situation wasn't entirely the fault of the machines: the counties were at fault for not implementing approved systems.

Remote voting
I can understand why the ERS don't approve of this as networks can be hacked and if you have unsupervised locations, there's the possibility of coercion. Despite this, you could still have polling stations with electronic voting machines until the security for remote voting has been suitably improved.

In all the articles and research about e-voting, the biggest problem is security (especially in the case of remote voting). The IPPR document states that
  • ID cards and/or passwords could be stolen
  • If passwords are to be used, they would need to be short so they can be remembered, but that makes them more vulnerable
  • Biometrics could be used, but there would be a huge cost (the UK government estimates £31bn)
  • Viruses, firewall holes and network hacking
  • Voting programs are made by commercial sources. In the US there were calls to make the code 'open source' to ensure transparency, but doing that would mean voting systems could be hacked more easily
The last two problems could instantly be solved by not having remote voting until security has improved. You could just have unnetworked voting terminals and put together the totals at the end of the voting period. With biometrics, there is a long-term benefit, so the high cost might be worth it. Biometric cards would definitely be better than standard ID cards.

So, how would you improve security so that remote voting could be trusted? Well, you could use strong encryption on the database where the votes are kept. You could also use SQL stored procedures (or, more generally, parameterised queries) for website logins, which protect against attacks such as SQL injection. There's also RAID, mirrored servers and making sure the server is in a physically secure location. Some would say that encryption can be weak, but there are also extremely strong varieties.
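The protection that stored procedures offer comes from keeping user input out of the SQL text itself. SQLite has no stored procedures, but the same principle can be shown with parameterised queries; the table and login check below are hypothetical, a minimal sketch rather than a real voting backend:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO voters VALUES (?, ?)", ("alice", "x9f3"))

def login_exists(conn, username):
    # The ? placeholder sends the username as data, never as SQL,
    # so input like "' OR '1'='1" cannot alter the query.
    row = conn.execute(
        "SELECT 1 FROM voters WHERE username = ?", (username,)
    ).fetchone()
    return row is not None

print(login_exists(conn, "alice"))        # True
print(login_exists(conn, "' OR '1'='1"))  # False
```

Had the query been built by string concatenation instead, the second call would have matched every row.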

Paper-based systems
Dr. Rebecca Mercuri is a noted expert in this field and was involved with the decision to have a hand recount of votes in Florida in the 2000 US Presidential election. She strongly opposes any 100% electronic method (so she'll probably not be happy with the fact that 23 US states don't require paper records of votes). In this article, she mentions the problems in California. What Dr. Mercuri fails to acknowledge is that it was at least partly the fault of individual counties for not using approved versions of the Diebold voting system. She also doesn't consider the fact that a lot of security problems are caused by the machines being networked (they don't have to be). E-voting speeds up the counting process and can help people with disabilities, so there are benefits.

Disabled people
According to the IPPR background paper, privacy is increased for disabled people (this is because they can use the same systems instead of going to a separate location). The height of the machines could also be increased or decreased for those with back problems (or for people in wheelchairs). You could also have audio versions of the ballot for those who are blind. E-voting can therefore make democracy more inclusive.

In Britain there were several trials (15 in total) and the most notable ones were in Swindon and Sheffield. In both cases the voter turnout increased. In Swindon, 61% of voters in a survey felt that e-voting was better and 94% stated that they would use e-voting again in a general election. In 2002 (the Swindon trial), turnout was as high as 31.2%. This may seem low, but it's still a significant increase compared to previous years (for further details of the trials, see the E-voting: Policy and Practice document).

Usage in the student movement
Many student unions across the country have recently started to use e-voting, and most seem to include remote voting in their implementation because it means people don't have to come to campus just to vote (they may not have lectures, seminars or labs on that day). At Hull University Union, the first year of e-voting saw 1,718 voters, a 25% increase on the 06/07 total. There has been a lot of controversy with remote voting, though. The University of Essex's student union had to change the result of their presidential election because of electoral misconduct and an unusually large number of votes coming from certain IP addresses. This could have meant that people were taking others over to a particular machine and influencing the way they voted. Coercion might have happened, but cancelling all the votes from those IP addresses could mean that some perfectly legitimate votes were made useless. They should have got the usernames and investigated those people instead.

There are (currently) a number of security issues with e-voting and many of those are linked to remote voting. This is unfortunate because remote voting allows greater flexibility. However, there are ways to improve security. E-counting and machine voting definitely have benefits and there is no reason why they cannot be used straight away (providing approved systems are implemented).

So, what do you think?

Technorati tags: E-voting, Technology
          Images and applying licences   
There are many situations where you wouldn't want your property stolen or misused. Fortunately, there are many ways to protect your work (if you want it to be protected). In the software industry there is a vast array of open source licences, and the most popular has to be the GNU General Public License. Microsoft has its own proprietary End User Licence Agreements (EULAs); an example of one of these licences is here. Microsoft also has its Windows Genuine Advantage (WGA), and the music industry has a history of using Digital Rights Management (however, DRM is no longer used by some major music companies, Warner Bros. being one example).

You also have a number of choices when dealing with images. First of all, there's the standard Copyright, where all rights are normally reserved. This means that the only way to do anything with the image would be to ask the permission of the owner (or pay for it if there's some form of revenue model in place). This can be considered very restrictive, but it does mean that nothing will happen if you don't want it to. If someone does try to break that licence, you could ask them for a share of the profit, get them to stop using the images, or in some cases you could take them to court.

When dealing with images, I apply licences, but they aren't forms of Copyright. I use licences created by Creative Commons. These are less restrictive, but they still have rules that you must follow. For example, their Attribution licence means:
"You let others copy, distribute, display, and perform your copyrighted work — and derivative works based upon it — but only if they give credit the way you request."
Another example is the Attribution-Non Commercial-Non Derivative Works licence, which means you can use the image if you give credit to the author, but it can't be for the purposes of earning money and you can't change it in any way.

What do I use?
  • Photos - for all my photos I use Creative Commons Attribution. I like to have credit if my pictures are used for anything and I don't mind if it's commercial or not-for-profit. I don't mind people changing the pictures, simply because I'm not the best photographer and someone else could dramatically improve what I've done. However, if my standard improves, I might change my licence.
  • Photo-edits - Recently, I've been doing a lot of work in Adobe Photoshop. I take photos and play around with the program's various features to see what works. If I produce anything that I think is decent or better, I apply an Attribution-Non Derivative Works licence. This is because (as I've already said) I like getting credit for my work, and if either I or someone else can earn money from what I do, that's great. However, I have started to put a lot more time into making these photo-edits good, so I wouldn't want people altering them.
Other aspects of my policy
If I ever make a photo-edit using someone else's photo, I always ask permission first. They might not have the same licensing system as me, so if I took something they did without asking, I would be stealing. Also, if someone didn't want me to take a photo of them, I would never question their decision or forcibly take a picture. Doing otherwise would be unfair and could be considered an invasion of personal space. I would also remove a photo or photo-edit from the internet if the subject asked (they might not like what I've done).

One thing I've started to do recently is change the licence type on an image if someone made a request. For example, if I uploaded a photo, they might not want profit made from the work. I wouldn't question the reasons and in that case I would change the standard Attribution licence to Attribution-Non Commercial.

Enforcing Creative Commons
There is one notable case where breaking one of these licences initiated a court case. Adam Curry, a former MTV presenter and currently the Founder and President of Podshow (a successful podcasting network), frequently posts photos to the online photo gallery Flickr (a website that I use). The licence that he uses is Attribution-Non Commercial-Share Alike. However, a Dutch gossip magazine used four of these photos for commercial purposes (i.e. selling their magazine) without asking permission. When Adam Curry found out about this, he took the magazine to court and won. This is now used as a test case.

Not using a licence
There are many people who don't apply licences to at least some of their work. This is fine if they don't mind people doing anything with what they've created. However, it might be difficult to e.g. win a court case if your image was used in a way that you didn't like. The person who did something with the image would not be under any obligation to remove it from wherever it was placed/posted.

So, licensing can be important. You must also remember that there are plenty of choices out there, so it's likely that you'll be able to find something to suit your needs. I have a licensing policy, but as you can see I allow some flexibility.

So, what do you think?

Technorati tags: Photography, Photoshop, Images, Licensing
          FrOSCon 2013, or, why is there no MirBSD exhibit?   

FrOSCon is approaching, and all MirBSD developers will attend… but why’s there no MirBSD exhibit? The answer to that is a bit complex. First let’s state that of course we will participate in the event as well as the Open Source world. We’ll also be geocaching around the campus with other interested (mostly OSS) people (including those we won for this sport) and helping out other OSS projects we’ve become attached to.

MirOS BSD, the operating system, is a niche system. The conference on the other hand got “younger” and more mainstream. This means that almost all conference visitors do not belong to the target group of MirOS BSD which somewhat is an “ancient solution”: the most classical BSD around (NetBSD® loses because they have rc.d and PAM and lack sendmail(8), sorry guys, your attempt at being not reformable doesn’t count) and running on restricted hardware (such as my 486SLC with 12 MiB RAM) and exots (SPARCstation). It’s viable even as developer workstation (if your hardware is supported… otherwise just virtualise it) but its strength lies with SPARC support and “embedded x86”. And being run as virtual machine: we’re reportedly more stable and more performant than OpenBSD. MirBSD is not cut off from modern development and occasionally takes a questionable but justified choice (such as using 16-bit Unicode internally) or a weird-looking but beneficial one (such as OPTU encoding saving us locale(1) hassles) or even acts as technological pioneer (64-bit time_t on ILP32 platforms) or, at least, is faster than OpenBSD (newer GNU toolchain, things like that), but usually more conservatively, and yes, this is by design, not by lack of manpower, most of the time.

The MirPorts Framework, while technically superiour in enough places, is something that just cannot happen without manpower. I (tg@) am still using it exclusively, continuing to update ports I use and occasionally creating new ones (mupdf is in the works!), but it’s not something I’d recommend someone (other than an Mac OSX user) to use on a nōn-MirBSD system (Interix is not exactly thriving either, and the Interix support was only begun; other OSes are not widely tested).

The MirBSD Korn Shell is probably the one thing I will be remembered for. But I have absolutely no idea how one would present it on a booth at such an exhibition. A talk is much more likely. So no on that front too.

jupp, the editor which sucks less, is probably something that does deserve mainstream interest (especially considering Natureshadow is using it while teaching computing to kids) but probably more in a workshop setting. And booth space is precious enough in the FH so I think that’d be unfair.

All the other subprojects and side projects Benny and I have, such as mirₘᵢₙcⒺ, josef stalin, FreeWRT, Lunix Ewe, Shellsnippets, the fonts, etc. are interesting but share little, if any, common ground. Again, this does not match the vast majority of visitors. We probably should push a number of these more, but a booth isn’t “it” here, either.

MirOS Linux (“MirLinux”) and MirOS Windows are, despite otherwise-saying rumours called W*k*p*d*a, only premature ideas that will not really be worked on (though MirLinux concepts are found in mirₘᵢₙcⒺ and stalin).

As you can see, despite all developers having full-time dayjobs, The MirOS Project is far from being obsolete. We hope that our website visitors understand our reasons to not have an exhibition booth of our own (even if the SPARCstation makes for a way cool one, it’s too heavy to lift all the time), and would like to point out that there are several other booths (commercial ones, as well as OSS ones such as AllBSD, Debian and (talking to) others) and other itineraries we participate in. This year both Benny and I have been roped into helping out the conference itself, too (not exactly involuntarily though).

The best way to talk to us is IRC during regular European “geek” hours (i.e. until way too late into the night – which Americans should benefit from), semi-synchronously, or mailing lists. We sort of expect you to not be afraid to RTFM and look up acronyms you don’t understand; The MirOS Project is not unfriendly but definitely not suited for your proverbial Aunt Tilly, newbies, “desktop” users, and people who aren’t at least somewhat capable of using written English (this is by design).


Michael Langguth and Scalaris AG asked me to publish the mksh/Win32 Beta 14 source and binary archive, and it is with joy I’m doing this.

Checksums and Hashes

  • RMD160 (ports/ = 0dc8ef6e95592bd132f701ca77c4e0a3afe46f24
  • TIGER (ports/ = 966e548f9e9c1d5b137ae3ec48e60db4a57c9a0ed15720fb
  • 1181543005 517402 /MirOS/dist/mir/mksh/ports/
  • MD5 (ports/ = b57367b0710bf76a972b493562e2b6b5

Just a few words on it (more in the README.1st file included): this is a port of The MirBSD Korn Shell R39 to the native WinAPI; it’s not quite got the full Unix feel (especially as it targets the Weihenstephan unxutils instead of a full Interix or Cygwin environment) but doesn’t need a full POSIX emulation layer either. It’s intended to replace MKS ksh and the MKS Toolkit. Source for the compatibility library is also included under The MirOS Licence; we aim at publishing it as OSI Certified Open Source Software like mksh itself. (There is a situation with dlmalloc/nedmalloc being resolved, and the icon is derived from the BSD dæmon which is a protected unregistered trademark, but we’re not Mozilla and allow distro packages to keep using it ☺) Rebasing it on a newer mksh(1) followed by (partial) integration into the main source code is a goal.

Have fun trying it out and hacking on it. It’s currently built with -DMKSH_NOPROSPECTOFWORK (so coprocesses and a few other minor things won’t work), but a SIGCHLD emulation is being worked on – but if you want to help out, I’m sure it’s welcome, just come on IRC or post on the mailing list, and I’ll forward things to Michael as needed. Reports on testing with other toolchain and OS versions are also welcome.

          11 Mar 2006   

"This project aims to connect open source projects concerning an IDE core library and framework written in Python to avoid the ongoing duplicating efforts."

Internet police is a generic term for police and secret police departments and other organizations in charge of policing the Internet in a number of countries. The major purposes of Internet police, depending on the state, are fighting cybercrime, as well as censorship, propaganda, and monitoring and manipulating online public opinion.

It has been reported that in 2005, departments of provincial and municipal governments in mainland China began creating teams of Internet commentators from propaganda and police departments and offering them classes in Marxism, propaganda techniques, and the Internet. They are reported to guide discussion on public bulletin boards away from politically sensitive topics by posting opinions anonymously or under false names. "They are actually hiring staff to curse online", said Liu Di, a Chinese student who was arrested for posting her comments in blogs.
Chinese Internet police also erase anti-Communist comments and post pro-government messages. Chinese Communist Party leader Hu Jintao has declared the party's intent to strengthen administration of the online environment and maintain the initiative in online opinion.

The Computer Emergency Response Team of Estonia (CERT Estonia), established in 2006, is an organisation responsible for the management of security incidents in .ee computer networks. Its task is to assist Estonian Internet users in the implementation of preventive measures in order to reduce possible damage from security incidents and to help them in responding to security threats. CERT Estonia deals with security incidents that occur in Estonian networks, are started there, or have been notified of by citizens or institutions either in Estonia or abroad.
Cyber Crime Investigation Cell is a wing of Mumbai Police, India, to deal with Cyber crimes, and to enforce provisions of India's Information Technology Law, namely, Information Technology Act 2000, and various cyber crime related provisions of criminal laws, including the Indian Penal Code. Cyber Crime Investigation Cell is a part of Crime Branch, Criminal Investigation Department of the Mumbai Police.
Andhra Pradesh Cyber Crime Investigation Cell is a wing of Hyderabad Police, India, to deal with Cyber crimes.

Dutch police were reported to have set up an Internet Brigade to fight cybercrime. It will be allowed to infiltrate Internet newsgroups and discussion forums for intelligence gathering, to make pseudo-purchases and to provide services.
After the 2006 coup in Thailand, the Thai police has been active in monitoring and silencing dissidents online. Censorship of the Internet is carried out by the Ministry of Information and Communications Technology of Thailand and the Royal Thai Police, in collaboration with the Communications Authority of Thailand and the Telecommunication Authority of Thailand.

The Internet Watch Foundation (IWF) is the only recognised organisation in the United Kingdom operating an Internet ‘Hotline’ for the public and IT professionals to report their exposure to potentially illegal content online. It works in partnership with the police, Government, the public, Internet service providers and the wider online industry.

The Internet Crime Complaint Center, also known as IC3, is a multi-agency task force made up by the Federal Bureau of Investigation (FBI), the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA).

IC3's purpose is to serve as a central hub to receive, develop, and refer criminal complaints regarding the rapidly expanding occurrences of cyber-crime. The IC3 gives the victims of cybercrime a convenient and easy-to-use reporting mechanism that alerts authorities of suspected criminal or civil violations on the internet. IC3 develops leads and notifies law enforcement and regulatory agencies at the federal, state, local and international level, acting as a central referral mechanism for complaints involving Internet-related crimes.

Criminal threatening is the crime of intentionally or knowingly putting another person in fear of imminent bodily injury.

There is no legal definition in English law as to what constitutes criminal threatening behaviour, so it is up to the courts to decide on a case by case basis. However, if somebody threatens violence against somebody, then this may be a criminal offence. In most countries it is only an offence if it can be proven the person had the intention and equipment to carry out the threat. However if the threat involves the mention of a bomb it is automatically a crime.
In most U.S. jurisdictions, the crime remains a misdemeanor unless a deadly weapon is involved or actual violence is committed, in which case it is usually considered a felony.

Criminal threatening can be the result of verbal threats of violence, physical conduct (such as hand gestures or raised fists), actual physical contact, or even simply the placing of an object or graffiti on the property of another person with the purpose of coercing or terrorizing.

Criminal threatening is also defined by arson, vandalism, the delivery of noxious biological or chemical substances (or any substance that appears to be a toxic substance), or any other crime against the property of another person with the purpose of coercing or terrorizing any person in reckless disregard for causing fear, terror or inconvenience.

"Terrorizing" generally means to cause alarm, fright, or dread in another person or inducing apprehension of violence from a hostile or threatening event, person or object.

Crimint is a database run by the Metropolitan Police Service of Greater London which stores information on criminals, suspected criminals and protestors. It was created in 1994 and supplied by Memex Technology Limited. It supports the recording and searching of items of intelligence by both police officers and back office staff. As of 2005 it contained seven million information reports and 250,000 intelligence records. The database makes it much easier for police officers to find information on people, as one officer who used the system stated in 1996:

"With Crimint we are in a new world. I was recently asked if I knew something about a certain car. In the old days I would have had to hunt through my cards. I would probably have said, 'Yes, I do, but . . . '. With Crimint I was able to answer the question in about fifteen seconds. And with Crimint things just don't go missing."
People are able to request their information from the database under data protection laws. Requests have shown that the database holds large amounts of information on protesters who have not committed any crimes. Information is stored for at least seven years. Holding information on people who have never committed any offence may be against people's human rights. A police officer, Amerdeep Johal, used the database to contact sex offenders and threatened to disclose information about them from the database unless they paid him thousands of pounds.

Along with development of the Internet, state authorities in many parts of the world are moving forward to install mass surveillance of electronic communications, establish Internet censorship to limit the flow of information, and persecute individuals and groups who express “inconvenient” political views on the Internet. Many cyber-dissidents have found themselves persecuted for attempts to bypass state-controlled news media. Reporters Without Borders has released a Handbook For Bloggers and Cyber-Dissidents and maintains a roster of currently imprisoned cyber-dissidents.

Chinese Communist Party leader Hu Jintao ordered officials to "maintain the initiative in opinion on the Internet and raise the level of guidance online". An internet police force - reportedly numbering 30,000 - trawls websites and chat rooms, erasing anti-Communist comments and posting pro-government messages. However, the number of Internet police personnel has been disputed by Chinese authorities. Amnesty International has accused several companies, including Google, Microsoft and Yahoo!, of collusion with the Chinese authorities to restrict access to information over the Internet and to identify cyber-dissidents, including by hiring "big mamas".
It was reported that departments of provincial and municipal governments in mainland China began creating "teams of internet commentators, whose job is to guide discussion on public bulletin boards away from politically sensitive topics by posting opinions anonymously or under false names" in 2005 Applicants for the job were drawn mostly from the propaganda and police departments. Successful candidates have been offered classes in Marxism, propaganda techniques, and the Internet. "They are actually hiring staff to curse online," said Liu Di, a Chinese student who was arrested for posting her comments in blogs

Internet censorship is control or suppression of the publishing or accessing of information on the Internet. The legal issues are similar to offline censorship.
One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. A government can try to prevent its citizens from viewing these sites even if it has no control over the websites themselves. Filtering can be based on a blacklist or be dynamic. In the case of a blacklist, the list is usually not published, and it may be produced manually or automatically.

Barring total control over Internet-connected computers, such as in North Korea, total censorship of information on the Internet is very difficult (or impossible) to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) allow unconditional free speech, as the technology guarantees that material cannot be removed and the author of any information is impossible to link to a physical identity or organization.

In some cases, Internet censorship may involve deceit. In such cases the censoring authority may block content while leading the public to believe that censorship has not been applied. This may be done by having the ISP provide a fake "Not Found" error message upon the request of an Internet page that is actually found but blocked (see 404 error for details).

In November 2007, "Father of the Internet" Vint Cerf stated that he sees Government-led control of the Internet failing due to private ownership. Many internet experts use the term "splinternet" to describe some of the effects of national firewalls.

Some commonly used methods for censoring content are:

IP blocking. Access to a certain IP address is denied. If the target Web site is hosted in a shared hosting server, all websites on the same server will be blocked. This affects IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find proxies that have access to the target websites, but proxies may be jammed or blocked, and some Web sites, such as Wikipedia (when editing), also block proxies. Some large websites like Google have allocated additional IP addresses to circumvent the block, but later the block was extended to cover the new IPs.

DNS filtering and redirection. The DNS server fails to resolve domain names, or returns incorrect IP addresses. This affects all IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find a domain name server that resolves domain names correctly, but domain name servers are subject to blockage as well, especially IP blocking. Another workaround is to bypass DNS if the IP address is obtainable from other sources and is not blocked. Examples are modifying the Hosts file or typing the IP address instead of the domain name in a Web browser.

Uniform Resource Locator (URL) filtering. Scan the requested URL string for target keywords regardless of the domain name specified in the URL. This affects the HTTP protocol. Typical circumvention methods are to use escaped characters in the URL, or to use encrypted protocols such as VPN and TLS/SSL.

Packet filtering. Terminate TCP packet transmissions when a certain number of controversial keywords are detected. This affects all TCP-based protocols such as HTTP, FTP and POP, but search engine results pages are more likely to be censored. Typical circumvention methods are to use encrypted connections - such as VPN and TLS/SSL - to escape the HTML content, or to reduce the TCP/IP stack's MTU/MSS to reduce the amount of text contained in a given packet.

Connection reset. If a previous TCP connection is blocked by the filter, future connection attempts from both sides will also be blocked for up to 30 minutes. Depending on the location of the block, other users or websites may also be blocked if the communication is routed to the location of the block. A circumvention method is to ignore the reset packet sent by the firewall.

Reverse surveillance. Computers accessing certain websites including Google are automatically exposed to reverse scanning from the ISP in an apparent attempt to extract further information from the "offending" system.
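The hosts-file workaround mentioned under DNS filtering above can be sketched in a few lines of Python. This is an illustrative toy, not a real resolver: the domain name and override IP below are made up, and `socket.gethostbyname` merely stands in for an ordinary (possibly filtered) DNS lookup.

```python
import socket

# Hypothetical override table playing the role of a local hosts file:
# if a censored resolver lies about a domain, a known-good IP recorded
# here is used instead of the DNS answer.  Domain and IP are made up.
HOSTS_OVERRIDES = {
    "blocked.example": "203.0.113.5",
}

def resolve(domain, overrides=HOSTS_OVERRIDES, dns_lookup=socket.gethostbyname):
    """Return an IP for `domain`, preferring a local override over a
    (possibly filtered) DNS lookup."""
    if domain in overrides:
        return overrides[domain]
    return dns_lookup(domain)
```

Typing the IP address directly into a browser is the manual equivalent of consulting such an override table before asking DNS.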

One of the most popular filtering software programmes is SmartFilter, owned by Secure Computing in California, which has recently been bought by McAfee. SmartFilter has been used by Tunisia, Saudi Arabia and Sudan, as well as in the US and the UK.

There are a number of resources that allow users to bypass the technical aspects of Internet censorship. Each solution has differing ease of use, speed, and security from other options. Most, however, rely on gaining access to an internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws. This is an inherent problem in internet censorship in that so long as there is one publicly accessible system in the world without censorship, it will still be possible to have access to censored material.

Proxy websites are often the simplest and fastest way to access banned websites in censored nations. Such websites work by being themselves un-banned yet capable of displaying banned material within them. This is usually accomplished by entering a URL which the proxy website fetches and displays. Accessing proxies over the HTTPS protocol is recommended, since it is encrypted and harder to block.

Java Anon Proxy (JAP) is a free and open source anonymizer available for all operating systems. As of 2004, it also includes a blocking-resistance feature that allows users to circumvent the blocking of the underlying anonymity service AN.ON by accessing it via other users of the software (forwarding clients).

The addresses of JAP users that provide a forwarding server can be retrieved by contacting AN.ON's InfoService network, either automatically or, if this network is blocked too, by writing an e-mail to one of these InfoServices. The JAP software automatically decrypts the answer after the user completes a CAPTCHA. The developers are currently planning to integrate additional, even stronger blocking-resistance functions.

Using Virtual Private Networks, a user who experiences internet censorship can create a secure connection to a more permissive country, and browse the internet as if they were situated in that country. Some services are offered for a monthly fee, others are ad-supported.

Psiphon software allows users in nations with censored Internet, such as China, to access banned websites like Wikipedia. The service requires that the software be installed on a computer with uncensored access to the Internet, so that the computer can act as a proxy for users in censored environments.

In 1996, the United States enacted the Communications Decency Act, which severely restricted online speech that could potentially be seen by a minor – which, it was argued, was most of online speech. Free speech advocates, however, managed to have most of the act overturned by the courts. The Digital Millennium Copyright Act criminalizes the discussion and dissemination of technology that could be used to circumvent copyright protection mechanisms, and makes it easier to act against alleged copyright infringement on the Internet. Many school districts in the United States frequently censor material deemed inappropriate for the school setting. In 2000, the U.S. Congress passed the Children's Internet Protection Act (CIPA), which requires schools and public libraries receiving federal funding to install internet filters or blocking software. Congress is also considering legislation, the Deleting Online Predators Act, that would require schools, some businesses and libraries to block access to social networking websites. Opponents of Internet censorship argue that the free speech provisions of the First Amendment bar the government from any law or regulation that censors the Internet.

A 4 January 2007 restraining order issued by U.S. District Court Judge Jack B. Weinstein forbade a large number of activists in the psychiatric survivors movement from posting links on their websites to ostensibly leaked documents which purportedly show that Eli Lilly and Company intentionally withheld information as to the lethal side-effects of Zyprexa. The Electronic Frontier Foundation appealed this as prior restraint on the right to link to and post documents, saying that citizen-journalists should have the same First Amendment rights as major media outlets. It was later held that the judgment was unenforceable, though the First Amendment claims were rejected.

In January 2010, a lawsuit was filed against an online forum, ScubaBoard, by a Maldives diving charter company (see scubaboard lawsuit). The owner of the company claimed $10 million in damages caused by users of the forum and by its owner. Individual forum members were named in the lawsuit as "employees" of the forum, despite their identities being anonymous except for their IP addresses, which are visible only to the forum's moderators and owners. This lawsuit demonstrates the vulnerability of internet websites and internet forums to local and regional lawsuits for libel and damages.
          Financial Data Science Event In London Includes Hedge Fund Quantitative Analysts, Software Engineers   
International Business Times and Newsweek host their first Data Science in Capital Markets event this week (1st and 2nd March) at the Barbican in the City of London. A global audience of data scientists, quantitative analysts, and software engineers from hedge funds and investment banks is attending. Speakers include Wes McKinney, creator of the open source pandas library; Professor David Hand, chief scientific advisor at Winton; and Professor Steve Roberts, director of the Oxford-Man Institute.
What are SSL, SSH and HTTPS?

SSL - Secure Sockets Layer

What is SSL for?

It is a system that allows two computers to exchange information securely. SSL provides three things:

Privacy: it is impossible to eavesdrop on the information exchanged.
Integrity: it is impossible to tamper with the information exchanged.
Authentication: it guarantees the identity of the program, person or company you are communicating with.
SSL is a complement to TCP/IP and can (potentially) secure any protocol or program that uses TCP/IP.

SSL was created and developed by Netscape and RSA Security.

There are now open source versions, as well as a similar free protocol: TLS (see below).

Why use SSL rather than another system?

Why use OpenSSL?

SSL is standardized.
There is a free version of SSL, OpenSSL, which you can use in your programs without paying royalties.
OpenSSL is open source: anyone can inspect and audit the source code (the secret lies in the encryption keys, not in the algorithm itself).
SSL has been cryptanalyzed: it has been scrutinized more than any of its competitors and reviewed by countless cryptography experts, so it can be considered secure.
It is widely supported: it is easy to write programs that will interoperate with other programs using SSL.

Be very careful with proprietary systems: contrary to what one might think, the security of an encryption system rests not on the secrecy of the encryption algorithm but on the secrecy of the key. Trust only systems that have been published and analyzed.

How does SSL work?

SSL consists of two protocols:
SSL Handshake protocol: before communicating, the two SSL programs negotiate common keys and encryption protocols.
SSL Record protocol: once these are negotiated, they encrypt all the information exchanged and perform various integrity checks.

The SSL handshake
At the start of the communication, the client and the server exchange:
the SSL version they want to work with,
the list of encryption methods (symmetric and asymmetric) and signature methods each side knows, with key lengths,
the compression methods each side knows,
random numbers.

The client and the server try to use the strongest encryption protocol, stepping down until they find one that both sides support. Once that is done, they can begin exchanging data.
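That "step down until a common protocol is found" negotiation can be sketched as picking the first entry in the server's preference list that the client also offered. A minimal illustration in Python; the suite names are placeholders, not a real SSL cipher list:

```python
def negotiate(server_preferences, client_offers):
    """Return the first suite in the server's preference order that the
    client also offered, or None if there is no common suite."""
    offered = set(client_offers)
    for suite in server_preferences:
        if suite in offered:
            return suite
    return None

# The server prefers the strongest suite the client also knows:
best = negotiate(["AES256-SHA", "AES128-SHA", "RC4-MD5"],
                 ["RC4-MD5", "AES128-SHA"])
# best is "AES128-SHA": AES256-SHA was not offered by the client
```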

SSL communication ("record")
With SSL, the sender of the data:
splits the data into packets,
compresses the data,
cryptographically signs the data,
encrypts the data,
sends the data.

The receiver of the data:
decrypts the data,
verifies the signature of the data,
decompresses the data,
reassembles the packets.
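The record-protocol steps above can be sketched end to end. This is a toy, not the actual SSL record format: zlib stands in for the compression method, HMAC-SHA256 for the signature, a throwaway XOR keystream for the real symmetric cipher, and the hard-coded keys play the role of session keys produced by the handshake.

```python
import hmac, hashlib, zlib

MAC_KEY = b"demo-mac-key"   # stand-ins for session keys from the handshake
ENC_KEY = b"demo-enc-key"

def _toy_cipher(data, key):
    # Stand-in for a real symmetric cipher (DES, RC4, ...): XOR keystream.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send_record(payload, chunk_size=64):
    """Sender side: split, compress, sign (MAC), encrypt each fragment."""
    records = []
    for i in range(0, len(payload), chunk_size):
        fragment = zlib.compress(payload[i:i + chunk_size])
        mac = hmac.new(MAC_KEY, fragment, hashlib.sha256).digest()
        records.append(_toy_cipher(fragment + mac, ENC_KEY))
    return records

def receive_records(records):
    """Receiver side: decrypt, verify signature, decompress, reassemble."""
    payload = b""
    for record in records:
        plain = _toy_cipher(record, ENC_KEY)
        fragment, mac = plain[:-32], plain[-32:]
        expected = hmac.new(MAC_KEY, fragment, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("record tampered with")
        payload += zlib.decompress(fragment)
    return payload
```

Any modification of a record in transit changes the decrypted fragment or its MAC, so verification fails on the receiving side.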

How does SSL secure communications?

SSL uses:
an asymmetric encryption system (such as RSA or Diffie-Hellman): it is used to create the "master key", which in turn creates the session keys.
a symmetric encryption system (DES, 3DES, IDEA, RC4...) that uses the session keys to encrypt the data.
a cryptographic message-signing system (HMAC, using MD5, SHA...) to make sure the messages are not corrupted.

It is during the SSL handshake that the client and the server choose the systems they have in common (asymmetric encryption, symmetric encryption, signature and key length).

In your browser, you can see the list of systems in use by hovering over the little padlock while you are on an HTTPS page.

What are certificates for?
During an SSL handshake, you need to verify the identity of the party you are communicating with. How can you be sure the server you are talking to is who it claims to be?

That is where certificates come in. When you connect to a secure web server, it sends you a certificate containing the company's name, address and so on. It is a kind of identity card.

How can the authenticity of this identity card be verified?

PKIs (Public Key Infrastructures) are external companies (which you implicitly trust) that verify the authenticity of certificates.

(The list of these PKIs is built into your browser; it usually includes VeriSign, Thawte, etc.)

These PKIs cryptographically sign companies' certificates (and are paid to do so).


SSL can be used to secure practically any protocol that uses TCP/IP.

Some protocols have been specially modified to support SSL:

HTTPS is HTTP+SSL. This protocol is included in practically every browser and lets you, for example, check your bank accounts on the web securely.
FTPS is an extension of FTP (File Transfer Protocol) using SSL.
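As a small illustration of the client side of HTTPS, Python's standard `ssl` module builds contexts that verify server certificates and check hostnames by default. The snippet below only inspects those defaults; it makes no network connection.

```python
import ssl

# Build a client-side TLS context with the library defaults.  No network
# traffic happens here; this only shows the settings an HTTPS client
# starts from: certificate verification and hostname checking are on.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```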

SSH (Secure Shell) is a kind of secure telnet (or rlogin). It lets you connect to a remote computer securely and get a command line. SSH has extensions for securing other protocols (FTP, POP3 or even X Windows).

It is also possible to secure other protocols by creating SSL tunnels. Once the tunnel is created, you can run any protocol through it (SMTP, POP3, HTTP, NNTP, etc.). All the data exchanged is encrypted automatically.

This can be done with tools such as STunnel or SSH.

Take this example with the POP3 protocol:

With the POP3 protocol you normally use to read your email, passwords and messages travel over the Internet in the clear, so they can be stolen.

With an SSL tunnel, and without changing the client or server software, you can retrieve your email securely: nobody can steal your passwords or messages, because everything that passes through the SSL tunnel is encrypted.

But this requires installing STunnel on both the client and the server.
Some access providers offer this service, but it is still quite rare. Ask your provider whether this kind of service is available.

STunnel can thus secure most TCP/IP-based protocols without modifying the software, and it is very easy to install.
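As a sketch of what the client end of such a tunnel can look like, here is a minimal STunnel configuration for the POP3 case. The host name and ports below are placeholders, not a real service:

```ini
; client-side stunnel.conf (illustrative; host and ports are placeholders)
client = yes

[pop3s]
; local port the unmodified mail client connects to in the clear
accept =
; remote SSL-wrapped POP3 endpoint reached through the tunnel
connect = mail.example.net:995
```

The mail client is then pointed at localhost port 110 as if it were an ordinary POP3 server; STunnel encrypts everything between the local machine and the remote endpoint.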

What are the different versions of SSL?

SSL version 3.0 is very similar to SSL version 2.0, but SSL v2.0 supports fewer encryption algorithms than SSL v3.0.

TLS v1.0 is a similar protocol based on SSL. Applications using TLS v1.0 can communicate easily with applications using SSL v3.0.
So when I see the padlock, am I protected?

The padlock tells you that the communications between your browser and the site are secure: nobody can spy on them, and nobody can tamper with them. But it guarantees nothing more.

To use an analogy:

HTTPS (the padlock) is like an armored truck: it guarantees the security of the transport.

But only the transport.

The armored truck does not guarantee that the bank uses good safes that close properly.

Nor does the armored truck guarantee that the bank will not defraud anyone.

The armored truck really only guarantees the transport.

The same goes for HTTPS (the browser's little padlock).

Just as criminals can hire an armored truck, pirates and crooks can perfectly well set up a secure site (with the little padlock).

Be careful, and do not blindly trust information on any site, with or without a padlock.

Original article published by sebsauvage

Translated by Lucia Maurity y Nouira

Open Source Friday: GitHub designates Fridays as open source contribution days

Recently, GitHub launched a new open source initiative, Open Source Friday, which, as the name suggests, encourages individuals and organizations to set aside part of every Friday to contribute to open source projects.

For the past three years, GitHub has encouraged its employees to spend at least one Friday a month on open source work. Open Source Friday has now grown into a program that anyone can join; contributors can start with their own company's open source projects.

GitHub senior engineer Mike McQuaid explained the initiative: encouraging employees to contribute to open source both helps them sharpen their development skills and pushes the company to improve the infrastructure its business runs on, making everyday work go more smoothly.



          LinkedIn Open Sources a Pair of Incident-Escalation Tools   
The social network says it has seen huge internal adoption of these tools, even from non-technical teams such as sales.
          NEW INFO: A strange "Poker Venture" run out of Trump Tower   
First: A plea for help. I rarely run fundraisers on this site, but an emergency just hit. After spending my meager savings on frivolities like medicine and a new video card, MY BLOODY AIR CONDITIONER DIED. I live in a very hot attic in a very humid part of the country. When the outside temperature turns hellish, it becomes even hellisher up here -- for me, for my ladyfriend, and for my poor diabetic doggiefriend George.

Yes, I'm brash enough to mention the effects of global (or at least local) warming on my canine companion. He pants and pants but won't leave my side for the cooler climes of downstairs. The loyalty of a dog is touching, astounding, and a bit unnerving. (Would it tug at your heartstrings if I showed you his picture? Mine is the shamelessness born of desperation.)

If you "ding" the PayPal button to your left (you may have to scroll down), your generous contribution will go straight to the air conditioner fund. We don't need a big 'un. Our gratitude will be beyond words.

Before we get to our main investigative piece, we need to look at a couple of other stories...

Terror in the UK: Our sympathies and thoughts go out to the victims of the attack on the Finsbury Park Mosque, which has finally been officially labeled an act of terrorism.
Witnesses said he 'deliberately' drove onto the pavement outside north London's Muslim Welfare House - yards from the Finsbury Park Mosque - and jumped out of the cab shouting 'I'm going to kill all Muslims - I did my bit'.
A similar horror took place in Virginia:
A 17-year-old Muslim girl identified as Nabra was kidnapped and beaten to death early Sunday morning in Sterling, Virginia. She was reported as missing at roughly 4 a.m. and now police believe they have found her body in a pond.
So far, Donald Trump's twitter feed has mentioned neither of these outrages.

Roger Stone. The Roger Stone/Alex Jones team-up has been absolutely boggling. After building a formidable rep as a conspiratorial-mastermind-for-hire, Stone now pretends to be the victim of dark and evil forces. It's a surreal situation: Roger Stone is one of the original Watergaters and the king of the dirty tricksters, yet our modern paranoia addicts consider him an apostle of fair play and decency. What's next? Will the Infowarriors proclaim Pablo Escobar to be the saint of non-violence?

Stone's name came up in an NBC News story published yesterday: "NBC News Exclusive: Memo Shows Watergate Prosecutors Had Evidence Nixon White House Plotted Violence." In 1972, Nixonians planned to use bullyboys from YAF (Young Americans for Freedom, a notorious right-wing group of the time) to mount a violent physical attack against Daniel Ellsberg as he spoke -- along with William Kunstler and other notables -- at an anti-war rally on the Capitol steps. The Watergate Committee investigated the incident and outlined their findings in a memo that has remained unreleased until now.

Roger Stone was also interviewed. Here's a tidbit that everyone seems to have missed...

"Carl Rove"? Is that Turdblossom back when he was a young turd? Must be! Stone now seems to despise Rove, calling him a "political profiteer" -- unlike Stone himself, who always does what he does for the purest of motives, just like Jesus or Barry Allen. Also see here.

Ivanka, Donald and their "Poker Venture." Just after I had announced to the world that I was so over Louise Mensch, she publishes a truly fascinating bit of research which relies on open-source material instead of nameless informants. Okay, okay: The Nameless Ones do pop up in a couple of paragraphs. Readers of her piece should mentally excise those bits and double-check the rest.
Ivanka has been linked to eleven companies in the Trump financial disclosures. Her status has been put to “Inactive” on several odd holding companies...
The most immediately interesting company of Ivanka Trump’s is “Poker Venture Managing Member Corp“.  This is owned by Donald and Ivanka Trump. Ivanka’s company with her father itself is an officer of this very dodgy-looking shell, “Poker Venture LLC.” Judging by the corporation wiki, there is panic in Team Ivanka and Team Trump over “Poker Venture“.  It shows zero “Key People”, and has two other almost identical companies as its officers – the live, active PVMMC that Ivanka co-owns with her pops, and this “Inactive” attempt to clean Ivanka out of the picture: by: Poker Venture Managing Member Corp by: Donald J. Trump.

Those touring “Corporation Wiki” will be surprised to see that “Poker Venture Managing Member Corp by: Donald J Trump” lists itself as an officer of inactive “Poker Venture”, yet when one clicks on the gray icon, one is taken to the same active company.

All very strange.
I'll say! Beyond the fact that Trump allegedly divested himself of his business interests, isn't it a little unseemly for the President of the United States to be listed as the owner of a company called Poker Venture Managing Member Corp, which filed in Nevada?

This company is related to another enterprise called simply Poker Ventures, whose listed address is 725 5th Avenue, New York, NY -- Trump Tower. Mensch seems to have missed that part, although she thinks that this "Poker" business somehow links up to the botnet which she believes is run out of Trump Tower. (I see no evidence for this beyond the inscrutable pronouncements of The Nameless Ones.)

I'll tell you something else that Louise Mensch seems to have missed: This Poker Venture business appears to link up to some scandalous doings outlined in one of my previous posts (of which I happen to be quite proud). It's hard to summarize that complicated piece, but I'll try.

A Russian "Godfather" named Alimzhan Tokhtakhounov ran a shady operation out of Trump Tower -- specifically, unit 63A, not far below Trump's own living quarters. It was so shady that the FBI had bugged the joint. (We're talking money laundering.)

Tokhtakhounov -- known as "Little Taiwan" or "Taiwanchik" because he looks Asian -- is the guy who linked Donald Trump up with the world of beauty contests in Russia. Taiwanchik has his fingers in all sorts of interesting deals -- for example, he was once arrested for rigging an Olympic figure skating competition.

Tokhtakhounov had partners in his New York enterprise -- Vadim Trincher and Anatoly Golubchik. (Trincher was the 2009 world poker champion.) They were tried and convicted. Guess who put 'em away? Preet Bharara.

That's right: The U.S. attorney famously fired by Donald Trump secured convictions against two guys running a criminal enterprise right below Trump's feet in Trump Tower.
Dirty money must needs be laundered, right? One great way to launder money is via the world of art. Banks won't ask too many questions if you tell 'em that someone just paid twenty million for a Picasso.

Enter Helly Nahmad, who used to run a tony art gallery in Manhattan. His family is worth some $3 billion...
From a 2013 story in the NYT:
Mr. Nahmad, a night-life fixture known for his showy extravagance and celebrity crowd — a $21 million Trump Tower apartment and friendships with people like Gisele Bündchen and Leonardo DiCaprio — was charged in April in a racketeering indictment brought by federal prosecutors in Manhattan. He was accused of being part financier, part money launderer and part bookmaker in a network that organized poker games and sports betting operations and drew hundred-thousand-dollar wagers from celebrities and billionaires.
The feds knew his secrets because they were listening in on Nahmad's cellphone chats.
But Helly’s interest in gambling led to trouble. The high-stakes poker and sports-betting ring that he is accused of helping to lead — with activity stretching from New York and Los Angeles — ultimately came to the attention of federal authorities who were investigating Russian organized crime figures.

Mr. Nahmad helped not only to bankroll the operation, according to prosecutors, but was also personally involved in taking sports bets. In all, 34 people were indicted in the case. The lead defendant is Alimzhan Tokhtakhounov, whom authorities identify as a high-ranking Russian gangster known by his nickname, Taiwanchik.
All of this has to do with the world of high-stakes poker. These people linked up with a coast-to-coast gambling operation which attracted a number of Hollywood celebrities, including Ben Affleck and Tobey Maguire.

My original post has many more details -- and by "many" I mean MANY. (Check out the Cyprus connection, which takes in Nahmad, Taiwanchik and Trump himself.) But right now, I want you to focus on "the holy game of poker."

1. Donald and Ivanka run something called "Poker Venture," headquartered in Trump Tower but incorporated in Nevada.

2. Directly below Trump's living quarters was a crooked enterprise run by Russian crime lord Alimzhan Tokhtakhounov, whose links to Trump himself are beyond dispute. Tokhtakhounov got away; he is now in Russia.

3. Helly Nahmad, who also had a Trump Tower address, was involved with a nationwide (actually international) high-stakes poker ring.

4. Nahmad and Tokhtakhounov deny knowing each other, even though Preet Bharara named them both as co-defendants when he made a case against this money laundering/gambling operation. They also both link up with Trincher and the other defendants.

It may be as well to quote from the above-cited 2013 US Attorney's Office press release:
The Taiwanchik-Trincher Organization is a nationwide criminal enterprise with strong ties to Russia and Ukraine. The leadership of the organization ran an international sportsbook that catered primarily to Russian oligarchs living in Russia and Ukraine and throughout the world. The Taiwanchik-Trincher Organization laundered tens of millions of dollars in proceeds from the gambling operation from Russia and the Ukraine through shell companies and bank accounts in Cyprus, and from Cyprus into the U.S. Once the money arrived in the U.S, it was either laundered through additional shell companies or invested in seemingly legitimate investments, such as hedge funds or real estate.
Speaking of which: Many people have wondered who helped Jared Kushner purchase that ridiculously overpriced skyscraper at 666 Fifth Avenue. (I'm not claiming to have proof of a connection. I'm just sayin'.) For that matter, quite a few people have wondered why anyone would invest in Donald Trump's various properties, given the rather odd way he does business.

Let's get back to that press release:
The Nahmad-Trincher Organization is a nationwide criminal enterprise with leadership in Los Angeles, California, and New York City. The organization ran a high-stakes illegal gambling business that catered primarily to multi-millionaire and billionaire clients. The organization utilized several online gambling websites that operated illegally in the U.S. Debts owed to the Nahmad-Trincher Organization sometimes reached hundreds of thousands of dollars and even millions.
NYPD Commissioner Raymond W. Kelly said: “The subjects in this case ran high-stakes illegal poker games and online gambling, proceeds from which are alleged to have been funneled to organized crime overseas. The one thing they didn't bet on was the New York City police and federal investigators’ attention. I commend the NYPD Organized Crime Investigations Division and their partners in the FBI and U.S. Attorney Bharara's office for identifying and bringing the members of this organization to justice.”
Well, we know what Trump did to Bharara. No good deed goes unpunished.

The question before us is this: Is the "Poker Ventures" that lists Donald and Ivanka as owners -- and which lists Trump Tower as its address -- part of the very real "poker venture" run by criminals living right below Donald's feet in that very same building?

I can't prove it. But the nomenclature sure as hell makes the idea seem inescapable.

Nomenclature isn't all we have to go on. Let's return to Louise Mensch's article (stressing, once again, that this piece -- unlike much of her recent work -- derives from open sources, all properly cited)...
Equally odd is that the state of New Jersey – (Ivanka Trump has a New Jersey address listed as one of her business records, associated with Poker Ventures) – has added to its newly published list of “Internet Gaming Ancillary Companies” both Poker Ventures LLC, which was already listed, but also “Novacorp Net Ltd”, “VidMob Inc” and “Reblaze Technologies”.
So: Poker Ventures has to do with online gambling. (The legality of online gaming is a matter of some dispute.) Remember: The crooked Nahmad/Trincher operation also involved online gambling.

And Poker Ventures LLC does indeed appear on that list compiled by the state of New Jersey. See for yourself.

Mensch goes on to connect Poker Ventures up with some other notable names on that list, shady concerns which have definite connections to both Russians and Israelis. One of these enterprises,  Reblaze Technologies, seems to have little to do with gambling and much to do with hacking: publishes anti-NSA blogs such as these, lauding the ‘hacking tools’ leaked by Shadow Brokers. Reblaze also offers lists of “protect your website” services you can buy from Russian hackers [sic], listing, ostensibly to protect against them, the full range of tools employed on Russia’s hack of America; its founder repeated the anti-NSA blog in an article that reads as a threat to hack America on Medium in December 2016.
Fascinating stuff. That "protect your website" scam reminds me of the hoary "watch your car" racket illustrated in those old Dead End Kid movies. You should hit those links; they take you into very odd places.

Unfortunately, we don't yet have any proof (beyond the word of Mensch's Nameless Ones) that this Reblaze business is tied up with Trump's Poker Ventures. Pity that: The possibilities are very intriguing.

For that matter, I must reiterate that I cannot prove that Donald and Ivanka's weird foray into the worlds of poker and online gaming is part-and-parcel of the poker and online gaming operation run by Helly Nahmad and his Russian gangster associates. But come on: It's hard not to conclude that we're dealing with two ingredients from the same stew-pot. These poker-related ventures form a Venn diagram in which the two circles seem nearly congruent. You can't fairly accuse me of leaping to wild conclusions: This ain't the kind of hazy guff you get from Alex Jones.

Louise Mensch, if you're reading these words: Thanks for returning to the world of real investigative writing. In the future, I hope you stop relying on the private sources who have provided you with so many dubious scoops. You'll have much more impact if you continue to provide stories that can be verified.

I strongly urge you to look into the possible links between "Poker Ventures" and the real-world poker venture in Trump Tower.

And please: Next time you feel tempted to accuse a perceived adversary of being a Russian spy, bite your tongue until it bleeds. A little more caution in your rhetoric will help you in the long run.

Finally: If these words have proven intriguing or enlightening to you, please consider dinging that PayPal account. It's already infernally muggy in here -- several degrees hotter than the temps outside. I feel like I'm melting.
          Comment on WordPress Marketing Team Launches Case Studies and Usage Survey for Agencies, Clients, and Enterprises by Alec   
Hi Steve, Thanks for your feedback about caching. There have been long periods where WP Super Cache hasn't been maintained, W3TC was broken, or WP Rocket (paid) was missing key features. Satollo's Hyper Cache upgrade from v2 to v3 more or less entirely broke the plugin (I included a link but it's not accepted - try searching for "WordPress Caching Drag Race" for the details). This is all anything but a smooth experience for the end user.

When I mentioned caching in passing, I didn't mean that WordPress core should write caching for all contexts and all levels, just that there should be effective PHP caching which would run efficiently on shared hosting and smaller VPS out of the box. The core caching could be written in such a way that it can be enhanced or overridden by developers running a Varnish static file cache fed by Nginx microcaching and with a custom implementation of Memcached. Some developers like that don't even use a caching plugin; all the caching is done in open source server-level tools.

Pointing to that tiny minority (which includes WP Engine and Flywheel, for instance) as a reason not to provide a rock-solid core caching system is a kind of tedious sophistry. "Move to a better host", when it means the host is charging at least $50/month for fewer than 100,000 visits, is just another example of how we are crippling WordPress deliberately in order to vacuum more money out of users' pockets in upgraded hosting, upgraded plugins, and frivolous maintenance contracts. These external bolt-on solutions from hosts aren't necessarily even all that good: while highly efficient, both the WPE and Flywheel caching tools are incredibly fragile and work only for sites with mainly non-logged-in visitors. For membership sites or ecommerce you usually have to roll your own.

The basics like caching should work out of the box.
There's more than enough talent, with thousands of volunteers and incredibly talented programmers, to have caching - for instance - solved in two weeks. Why must millions of end users bear the cost of manually tinkering and experimenting with half-baked or poorly maintained caching solutions?
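The pluggable design the comment sketches - a simple default cache that hosts and developers can override with a Memcached or Varnish-style backend - can be illustrated abstractly. This is a hypothetical sketch in Python, not WordPress code; every name here (PageCache, MemoryBackend) is invented for illustration:

```python
import time


class MemoryBackend:
    """Default backend: a plain in-process dict, which works anywhere
    (shared hosting, small VPS) with no extra services."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if expires < time.time():       # expired entry: evict and miss
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)


class PageCache:
    """Core cache with an override point: a host can pass in its own
    backend (Memcached, Redis, etc.) without changing calling code."""

    def __init__(self, backend=None, ttl=300):
        self.backend = backend or MemoryBackend()
        self.ttl = ttl

    def render(self, url, generate):
        cached = self.backend.get(url)
        if cached is not None:
            return cached               # cache hit: skip page generation
        page = generate(url)            # cache miss: build the page once
        self.backend.set(url, page, self.ttl)
        return page


# Usage: the expensive generator runs only on the first request.
calls = []

def build_page(url):
    calls.append(url)
    return f"<html>page for {url}</html>"

cache = PageCache()
first = cache.render("/blog", build_page)
second = cache.render("/blog", build_page)
```

The point of the sketch is the constructor's `backend` parameter: the default works out of the box, while a host that runs its own caching stack can substitute a backend with the same `get`/`set` interface.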
          QA for App, DW, Mobile - 70% Auto - Financial Services -   
Multiple openings
Relocation and H1 transfer are considered (must have 36+ months of visa validity remaining)
No remote/telecommute. FTE pay: $90-100K depending on previous earning history and interview.
Hiring Process: Oral quiz over phone; On-line Technical test; phone interview; on-site interview; hiring decision

Salary: Ideally $90K, Max. $100K

Position: 70% Automation/30% Manual

CLIENT is creating a QA Scrum team with individuals experienced testing in the following areas: Data Warehousing, Mobile, & App. Development. These QAEs will be deployed to various development teams as needed. Hence, the need for individuals testing different things.

Manager preferences: Agile, Automation, and the ability to create test plans.

Job Responsibilities:
• Provides guidance and subject matter expertise to engineers on testing and Quality Assurance (QA) methodologies and processes
• Works with engineers to drive improvements in code quality via manual and automated testing
• Responsible for managing the definition, implementation, and integration of quality principles into the design and development of software and IT processes
• Involved in the review of requirements specifications for weaknesses in function, performance, reliability, scalability, testability, usability, security, and compliance, and provides recommendations
• Plans and defines the testing approach, providing advice on prioritization of testing activity in support of identified risks in project schedules or test scenarios
• Develops test plans, testing resource requirements, and the overall scheduling of testing activity
• Responsible for developing the manual and automated test cases and configurations needed to meet testing of business requirements
• Executes test cases/scripts to ensure delivery of quality software applications
• Monitors and tracks resolution of defects, coordinating with engineers in order to prevent, report, and resolve them
• Designs, monitors, and analyzes quality assurance metrics such as defect counts, test results, and test status
• Identifies opportunities to adopt innovative technologies
• This "rebel with a cause" looks beyond the obvious for continuous improvement opportunities

Required Skills/Qualifications:
• 3+ years of experience in IT, with an emphasis on QA, and proven ability in writing test cases, running functional, automated, or performance tests, and managing defects
• Experience with Agile, other rapid application development methods, and the Waterfall SDLC
• Solid experience in test-driven development, unit testing, functional testing, system integration testing, regression testing, GUI testing, web service testing, and browser compatibility testing
• Experience working with test automation tools such as JMeter, HP LoadRunner, HP QuickTest Professional, and HP Quality Center; open source tools such as Selenium (Selenium IDE, Selenium RC, Selenium WebDriver), JUnit, and Eclipse; and preparation of an automation test framework
• Strong written and verbal communication skills
• Ability to effectively interpret technical and business objectives and challenges
• Ability to think abstractly and deal with ambiguous/under-defined problems
• Demonstrated willingness to learn new technologies and pride in how fast they develop working software
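For readers unfamiliar with the terms in the list above, the basic shape of an automated functional test - named cases, boundary checks, a negative case, and a pass/fail run - can be sketched in a few lines. This is a generic illustration using Python's unittest; the function under test and its behaviour are invented for the example, and the posting itself calls for Java/JUnit and Selenium-style tooling rather than this exact stack:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    # Functional case: expected behaviour for ordinary valid input
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    # Boundary cases: the edges of the valid range
    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    # Negative case: invalid input must be rejected, not mishandled
    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


# Execute the suite and collect a pass/fail result, the kind of signal a
# CI system (e.g. Jenkins, mentioned elsewhere in these postings) consumes.
runner_result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
)
```

The same structure scales up: Selenium-based functional tests swap the function call for browser interactions, but the case/assertion/runner pattern is unchanged.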

Educational requirement:
• Bachelor's or master's degree in computer science, computer engineering, or other technical discipline, or equivalent work experience, is preferred

Preferred Additional:
• Ability to enable business capabilities through innovation is a plus
• Experience with coding skills across a variety of platforms (JAVA, HTML5, DB2, XML, and Mainframe COBOL) is a plus
• Knowledge of web security and encryption technology is a plus
• Any of the following test certifications - QAI, ASQ, IIST, ISEB, ISTQB - is a plus
• Experience with payments technology and the industry is a plus
• Call center experience is a plus

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Software Quality Assurance Engineer -   
Job Responsibilities:
• Provides guidance and subject matter expertise to engineers on testing and Quality Assurance (QA) methodologies and processes
• Works with engineers to drive improvements in code quality via manual and automated testing
• Responsible for managing the definition, implementation, and integration of quality principles into the design and development of software and IT processes
• Involved in the review of requirements specifications for weaknesses in function, performance, reliability, scalability, testability, usability, security, and compliance, and provides recommendations
• Plans and defines the testing approach, providing advice on prioritization of testing activity in support of identified risks in project schedules or test scenarios
• Develops test plans, testing resource requirements, and the overall scheduling of testing activity
• Responsible for developing the manual and automated test cases and configurations needed to meet testing of business requirements
• Executes test cases/scripts to ensure delivery of quality software applications
• Monitors and tracks resolution of defects, coordinating with engineers in order to prevent, report, and resolve them
• Designs, monitors, and analyzes quality assurance metrics such as defect counts, test results, and test status
• Identifies opportunities to adopt innovative technologies
• This "rebel with a cause" looks beyond the obvious for continuous improvement opportunities

Required Skills/Qualifications:
• 3+ years of experience in IT, with an emphasis on QA, and proven ability in writing test cases, running functional, automated, or performance tests, and managing defects
• Experience with Agile, other rapid application development methods, and the Waterfall SDLC
• Solid experience in test-driven development, unit testing, functional testing, system integration testing, regression testing, GUI testing, web service testing, and browser compatibility testing
• Experience working with test automation tools such as JMeter, HP LoadRunner, HP QuickTest Professional, and HP Quality Center; open source tools such as Selenium (Selenium IDE, Selenium RC, Selenium WebDriver), JUnit, and Eclipse; and preparation of an automation test framework
• Strong written and verbal communication skills
• Ability to effectively interpret technical and business objectives and challenges
• Ability to think abstractly and deal with ambiguous/under-defined problems
• Demonstrated willingness to learn new technologies and pride in how fast they develop working software

Educational requirement:
• Bachelor's or master's degree in computer science, computer engineering, or other technical discipline, or equivalent work experience, is preferred

Preferred Additional:
• Ability to enable business capabilities through innovation is a plus
• Experience with coding skills across a variety of platforms (JAVA, HTML5, DB2, XML, and Mainframe COBOL) is a plus
• Knowledge of web security and encryption technology is a plus
• Any of the following test certifications - QAI, ASQ, IIST, ISEB, ISTQB - is a plus
• Experience with payments technology and the industry is a plus
• Call center experience is a plus
We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
Quality Assurance Engineer, Software - Phoenix, AZ 85005

Salary: $95K

Position: 70% Automation/30% Manual


Client is creating a QA Scrum team with individuals experienced testing in: Data Warehousing, Mobile, & App. Development.

Manager preferences: Agile, Automation, and the ability to create test plans.

Job Responsibilities:

• Provides guidance and subject matter expertise to engineers on testing and Quality Assurance (QA) methodologies and processes
• Works with engineers to drive improvements in code quality via manual and automated testing
• Plans and defines the testing approach, providing advice on prioritization of testing activity in support of identified risks in project schedules or test scenarios
• Develops test plans, testing resource requirements, and the overall scheduling of testing activity
• Responsible for developing the manual and automated test cases and configurations needed to meet testing of business requirements
• Executes test cases/scripts to ensure delivery of quality software applications
• Monitors and tracks resolution of defects, coordinating with engineers in order to prevent, report, and resolve them
• Designs, monitors, and analyzes quality assurance metrics such as defect counts, test results, and test status

Required Skills/Qualifications:

• 3+ years of experience in IT, with an emphasis on QA, and proven ability in writing test cases, running functional, automated, or performance tests, and managing defects
• Experience with Agile, other rapid application development methods, and the Waterfall SDLC
• Solid experience in test-driven development, unit testing, functional testing, system integration testing, regression testing, GUI testing, web service testing, and browser compatibility testing
• Experience working with test automation tools such as JMeter, HP LoadRunner, HP QuickTest Professional, and HP Quality Center; open source tools such as Selenium (Selenium IDE, Selenium RC, Selenium WebDriver), JUnit, and Eclipse; and preparation of an automation test framework

Educational requirement:

• Bachelor's or master's degree in computer science, computer engineering, or other technical discipline

Preferred Additional:

• Experience with coding skills across a variety of platforms (JAVA, HTML5, DB2, XML, and Mainframe COBOL) is a plus
• Knowledge of web security and encryption technology is a plus
• Any of the following test certifications - QAI, ASQ, IIST, ISEB, ISTQB - is a plus

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
If hired, our client will reimburse for relocation and will transfer a visa if necessary (if 36 months or more are left on the term).

Software Quality Assurance / Automation Engineer - Phoenix, AZ
Position: 70% Automation/30% Manual
Scope of Team:
Our global client is creating a QA Scrum team with individuals experienced testing in the following areas: Data Warehousing, Mobile, & App. Development. These QAEs will be deployed to various development teams as needed. Hence, the need for individuals testing different things.

Required Skills/Qualifications:
• 3+ years of experience in IT, with an emphasis on QA, and proven ability in writing test cases, running functional, automated, or performance tests, and managing defects
• Experience with Agile, other rapid application development methods, and the Waterfall SDLC
• Solid experience in test-driven development, unit testing, functional testing, system integration testing, regression testing, GUI testing, web service testing, and browser compatibility testing
• Experience working with test automation tools such as JMeter, HP LoadRunner, HP QuickTest Professional, and HP Quality Center; open source tools such as Selenium (Selenium IDE, Selenium RC, Selenium WebDriver), JUnit, and Eclipse; and preparation of an automation test framework
• Strong written and verbal communication skills
• Ability to effectively interpret technical and business objectives and challenges
• Ability to think abstractly and deal with ambiguous/under-defined problems
• Demonstrated willingness to learn new technologies and pride in how fast they develop working software

Educational requirement:
• Bachelor's or Master's degree in computer science, computer engineering, or other technical discipline, or equivalent work experience, is preferred

Preferred Additional:
• Ability to enable business capabilities through innovation is a plus
• Experience with coding skills across a variety of platforms (JAVA, HTML5, DB2, XML, and Mainframe COBOL) is a plus
• Knowledge of web security and encryption technology is a plus
• Any of the following test certifications - QAI, ASQ, IIST, ISEB, ISTQB - is a plus
• Experience with payments technology and the industry is a plus
• Call center experience is a plus

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Software Development Engineer   
Design, develop, and document test frameworks using Java, C, or C++
Provide technical leadership and direction to testing team members to ensure adherence to coding, quality, functionality, performance, scalability, and on-time delivery standards.
Lead, mentor and motivate team members to maximize their potential, foster innovation, boost productivity, and to deliver high quality software.
Work with a cross-functional team of hardware and software engineers to develop innovative automated testing solutions
Assist with measuring software quality and be able to present tradeoffs and provide risk assessment to all stakeholders.
Participate in defect triage meetings and provide defect reports to project team
Represent QA during project requirements and architectural reviews
Author test plans, test cases, and test reports
Serve as point of contact for day-to-day automation activities and resource allocation
Working closely with development teams, product managers, and peers to root cause, debug, and resolve issues
Perform code-reviews, coach and mentor team members to follow best practices and procedures

Minimum Qualifications:
Bachelor's degree in engineering, computer science or related field; advanced degree desirable.
4+ years of experience leading software testing teams.
5+ years of software development experience, testing mobile, web, and or enterprise apps, platforms, or systems
Outstanding programming skills in Java, C, or C++
Advanced experience with client side technologies such as JavaScript, CSS3, HTML5, AJAX, XML, JSON, REST, DOM and others.
Excellent experience and knowledge in leading the testing lifecycle of large scale mobile platform or enterprise software products.
Experience with Agile development methodologies.
Proven experience in testing/leading of mobile SDKs/APIs, or enterprise software platforms.
Excellent communication, organizational and analytical skills.
Experience with tools such as JIRA, Selenium, Load Runner etc.

Preferred Qualifications:
Proficient in Python, Perl, and shell scripting
Ability to programmatically test the product, measure test coverage, drive testability and diagnostic ability into the product, while promoting best practices in quality areas
Experience testing the kernel, kernel subsystems, and user space applications
Experience with open source test tools
Experience with Makefiles and Ant build scripts
API automation testing including working experience with unit test automation frameworks
Familiarity with the Eclipse IDE, GitHub, and Android SDK
Ability to triage issues, react well to changes, work with teams and ability to multi-task on multiple products and projects
Excellent communication, collaboration, reporting, analytical and problem solving skills
Comfortable working in short release cycles covering (2-4 weeks)
Experience working with and configuring continuous integration systems (e.g. Jenkins)
Experience with Selenium WebDriver, Robotium, Appium, or other automation frameworks
Experience writing code to test the Linux operating system, specifically, an in-depth understanding of the real time kernel, power management, scheduler, memory management, inter-process communication, and driver model

Experience developing mobile test apps (Android, iOS, etc)
Experience or familiarity with Android CTS test suite

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Senior UI Engineer   
**Please contact me at 415 228 4275 if you have any questions about the opportunity**

Modis's client is looking for a Senior UI Engineer. This could be a contract-to-hire or full-time opportunity. The client has locations in San Rafael and San Jose.

Roles and Responsibilities: This is a unique opportunity to be a key player in an organization that is at the forefront of the growing field of Big Data analytics. You will be responsible for helping to architect, design, implement, and test key user interfaces for our analytic tools in the Analytic Cloud. You will be engaged at the early stages and have the ability to develop and shape a new architecture. You will also be exposed to, and integrate with, a number of third-party and open source tools and technologies that will provide you with a unique opportunity to grow your skills and have fun while working on a dynamic and close-knit team.

Working Conditions: You will work with a diverse team of very talented developers who take pride in their work and value their customers. They are driven to see their products succeed in the marketplace and work as a team to accomplish their goals.

Required Experience:

Education and/or Certifications: Bachelor's degree or higher in Computer Science or a related discipline.

Experience and Qualifications: Experienced software engineer with 5+ years of strong professional development experience in web development with HTML5, CSS, JavaScript, and general Web 2.0 technologies.

Technical Skills and Abilities:
• Experience with JavaScript-based libraries such as ExtJS, jQuery, Bootstrap, Knockout, AngularJS, Backbone.js, YUI, and D3.js
• Strong JavaScript and Java problem solving, debugging, and performance tuning skills
• Good knowledge of object oriented analysis & design
• Strong instincts and background in creating simple, clean, and powerful user interfaces
• Experience with Platform as a Service (PaaS) environments and APIs such as OpenShift and Cloud Foundry a big plus
• Experience with web services (SOAP/REST) and SOA is a plus
• Background in using statistical analysis/modeling tools such as SAS, SPSS, R, etc. is desirable
• Background in using Business Intelligence and Data Mining tools desirable
• Background in scripting languages such as Groovy, Python, Perl, or Ruby is a plus
• Excellent oral and written communication skills
• Capability to provide technical leadership to the team
• Experience with agile development processes and tools desirable
          Android Developer   
Our client needs a Senior Android Software Engineer to help them deliver on their brand promise: "Eliminate bad user experiences from the world." They're a fast-growing start-up based in Mountain View. They have been in the business of helping website managers answer one question: "WHY are people leaving my website without buying?" They do that by recording 'think-aloud' video of people using websites, so you can quickly see when and why they get stuck and frustrated. They've rapidly become part of the website improvement process for many companies, such as Google, Amazon, Facebook, Walmart, LinkedIn, Apple, and Twitter. They're now expanding into the mobile space and want to help product teams answer key usability questions about their mobile websites and apps.

This is a really exciting opportunity for the developer who wants to be an inaugural engineer on our client's mobile engineering team. They need someone who will work with their existing team of developers to augment their current capabilities and help build the future.

Specific Responsibilities:
• Collaborate with our product team to help design and implement compelling features for mobile devices
• Collaborate with the platform engineering team to help define the REST APIs used in the implementation of our mobile apps
• Drive the establishment of industry-recognized best practices for Android development in our mixed platform environment
• Build apps and tools that facilitate usability testing on mobile devices and are easy to use by our panel of testers as well as our client developers

• Ability to complete a project end-to-end, from architecting to implementation and maintenance
• Ability to learn new technologies quickly
• Proven team player and self-starter, driven to achieve great results and constantly improve
• Experience working as part of an Agile development team
• Excellent analytical, debugging, and problem solving skills
• Crisp written and verbal communication skills
• Experience with Android, Java, and API design
• Ability to work on multiple product initiatives at once
• 7+ years experience in software development
• 2+ years experience developing on Android
• A B.S. in Computer Science or a related field, or equivalent experience

Bonus Points for:
• Android app(s) in the Play Store
• iOS knowledge and experience
• Cloud-connected mobile apps using REST architectures
• Contributions to developer-focused products or broadly deployed open source projects
• Experience with HTML5/JavaScript

If this is YOU, apply now for immediate consideration!
          Embedded Hardware Designer   
Outstanding company in Central California is looking for a talented developer to help build their new Ethernet and wireless communication products. Are you a fast and reliable worker? Do you like working in casual and fun environment? Do you want to build products that will be deployed all over the world? Then this might be the spot for you in Bakersfield, CA headquarters.
Key Responsibilities:
- Hardware design, prototyping and debugging
- Work with firmware engineers to bring up new platforms
- Capable of quickly learning new platforms
- Adaptable and able to work independently
At least 5 years of industry experience in the following:
- CPU board design using ARM processor architectures
- Component selection
- Schematic capture
- PCB layout
- General digital design
- USB & Ethernet interface design
- BSP development (Linux kernel)
- Experience with a broad range of processors
- Strong communication skills
- Ability to manage projects from inception to completion
Useful Skills:
- Rockwell or Schneider PLC experience
- 802.11 and cellular experience
- Industrial automation
- FPGA/VHDL development

The company specializes in the development of communication solutions compatible with the large automation suppliers' controllers, such as Rockwell Automation® and Schneider Electric®. The primary focus is to provide connectivity solutions that link dissimilar automation products. The company provides field-proven connectivity and communication solutions that bridge between various automation products as seamlessly as if they were all from the same supplier.

The company offers a very competitive salary and benefits package in an area where cost of living is extremely affordable.

Bakersfield is a better choice for living, working, growing, and playing, as it is affordable, accessible, and extremely welcoming. It is home to California State University, Bakersfield, Bakersfield College, a UC Merced campus and extensive adult education facilities. Bakersfield is also linked to the UCLA medical school through its six area hospitals. Besides the educational opportunities, Bakersfield offers several natural attractions. It is within two hours of Pacific Ocean beaches, mountains, and the Giant Sequoia National Monument. Bakersfield is home to one of the fastest flowing rivers west of the Mississippi, the Kern River, where white water rafters enjoy the great outdoors. Others enjoy leisurely walking, biking and roller-skating activities along the Kern River Parkway, extending nearly 20 miles along the banks of the Kern River. For more about this community and area, visit

Qualified candidates please submit your resume and any links to projects or open source contributions that you have made ASAP for review.

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          System Administrator / DevOps   
<span>We are looking for an amazing system administrator who loves automation and cloud infrastructure. &nbsp;While our software installs on premises, we have a vast testing lab running multiple flavors of Linux and Windows.<br>&nbsp;<br><B>Responsibilities:</B><br><ul>
<li>Build, monitor, and maintain production Linux systems. </li><li>Build, monitor, and maintain automated infrastructure to allow automated integration testing. </li><li>Resource for all things server and infrastructure. </li><li>We promise you don&#39;t have to deal with desktop user issues unless you want to. </li><li>Bonus work available if our LAN is something that interests you.</li></ul>
<li>Bachelor&#39;s in Computer Science or a related field preferred. </li><li>Experience in at least one programming language. </li><li>3+ years experience with Linux (Redhat/CentOS and Ubuntu) </li><li>2+ years experience with Windows Server </li><li>2+ years experience with at least one commercial database. </li><li>2+ years experience with at least one open source database. </li><li>3+ years experience with Chef. </li><li>Experience with Docker is a nice-to-have. </li><li>Experience with agile workflows and continuous integration. </li><li>Experience with JIRA is preferred</li></ul>
&nbsp;<br>Apply now for immediate consideration!<br>&nbsp;<br>&nbsp;<br></span>
          DynamoDB + Rake + Maven + Rack   

In sixnines, one of my pet Ruby web apps, I'm using DynamoDB, a NoSQL cloud database by AWS. It works like a charm, but the problem is that it's not so easy to create an integration test to make sure my code works together with the "real" DynamoDB server and tables. Let me show you how I solved it. The code is open source and you can see it in the yegor256/sixnines GitHub repo.

How to Bootstrap DynamoDB Local

First, you need to use DynamoDB Local, a command-line tool created by AWS exactly for testing purposes. You need to start it before your integration tests and stop it afterward.
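To make this concrete, here is a minimal Ruby sketch of that bootstrap. The `./dynamodb-local` unpack path and both helper names are assumptions for illustration, not details from the original post:

```ruby
# A minimal sketch (paths and helper names are assumptions): build the
# command line for DynamoDB Local, then start it before the integration
# tests and stop it when the test process exits.
def dynamodb_local_command(port)
  [
    'java',
    '-Djava.library.path=./dynamodb-local/DynamoDBLocal_lib',
    '-jar', './dynamodb-local/DynamoDBLocal.jar',
    '-inMemory',            # keep tables in RAM; nothing persists between runs
    '-port', port.to_s
  ]
end

def start_dynamodb_local(port: 8000)
  pid = Process.spawn(*dynamodb_local_command(port))
  at_exit { Process.kill('TERM', pid) rescue nil } # stop it afterward
  pid
end
```

With the server up on `localhost:8000`, the AWS SDK can then be pointed at that endpoint instead of the real service.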

          Building Slack Bots With IBM Watson Conversation   

I’ve open sourced a simple sample that shows how to leverage IBM Watson Conversation in Slack bots via the open source project Botkit. With Botkit and a Watson middleware, text messages defined in Conversation dialogs can easily be used in Slack bots. My sample shows, additionally, how to use Slack buttons in messages and how to invoke business logic at certain stages of the conversation.

Botkit is an open source framework to build bots that can be connected to popular messaging platforms like Slack and Facebook Messenger. IBM provides a middleware to easily leverage the conversation flows defined in Watson Conversation dialogs. The following code shows how to pass user input to Watson and how to return text messages to Slack.
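The code itself is not included in this excerpt. As a hedged sketch of what such wiring typically looks like with Botkit and the `botkit-middleware-watson` package (the credential field names and the `startBot` helper are assumptions, not the author's exact code):

```javascript
// Pure helper: turn Watson Conversation output into a single Slack reply.
function watsonReplyText(watsonData) {
  return (watsonData.output.text || []).join('\n');
}

// Wiring sketch -- only runs when real credentials are provided.
function startBot(config) {
  const Botkit = require('botkit');                    // assumed deps: botkit,
  const watson = require('botkit-middleware-watson')({ // botkit-middleware-watson
    username: config.watsonUsername,
    password: config.watsonPassword,
    workspace_id: config.workspaceId,
    url: 'https://gateway.watsonplatform.net/conversation/api',
    version_date: '2017-05-26'
  });

  const controller = Botkit.slackbot();
  controller.spawn({ token: config.slackToken }).startRTM();

  // Send every incoming Slack message through the Watson middleware first.
  controller.middleware.receive.use(watson.receive);

  controller.hears(['.*'], ['direct_message', 'direct_mention', 'mention'], (bot, message) => {
    if (message.watsonError) {
      bot.reply(message, 'Sorry, I could not reach Watson.');
    } else {
      bot.reply(message, watsonReplyText(message.watsonData));
    }
  });
}

module.exports = { watsonReplyText, startBot };
```

Keeping `watsonReplyText` as a pure function makes the Slack-facing formatting testable without live Slack or Watson credentials.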

          I have no good news   
A while ago I found a link to an RSS reader called Good News on this site. It was fun while I had it, but thanks to a reformat due to overly gregarious bluescreens, I lost the installer. It's one of those independent open source thingers with an address buried deep on a personal website, if I remember right, and its name is making it impossible to search for. Could anyone help me find it again?
          Mozilla Firefox for Windows (32-bit) 54.0.1   

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.

Copyright Betanews, Inc. 2017

          Mozilla Firefox for Windows (64-bit) 54.0.1   

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.


          Mozilla Firefox for Mac OS X 54.0.1   

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.


          Mozilla Firefox for Linux 54.0.1   

Mozilla Firefox is a free and open source Web browser descended from the Mozilla Application Suite and managed by Mozilla Corporation. Firefox is the second most widely used browser.

To display web pages, Firefox uses the Gecko layout engine, which implements most current web standards in addition to several features that are intended to anticipate likely additions to the standards.


          Open source archive for 44.1.A.0.184   
Open source download for Xperia™ Touch (G1109); software version 44.1.A.0.184
          Open source archive for 34.2.B.0.247   
Open source download for Xperia™ X Compact (SO-02J); software version 34.2.B.0.247
          Open source archive for 39.2.C.0.266   
Open source download for Xperia™ XZ (SOV34), Xperia™ X Performance (SOV33); software version 39.2.C.0.266
          Open source archive for 39.2.D.0.269   
Open source download for Xperia™ XZ (601SO) and Xperia™ X Performance (502SO); software version 39.2.D.0.269
          Open source archive for 39.2.B.0.288   
Open source download for Xperia™ XZ (SO-01J); software version 39.2.B.0.288
          Open source archive for 36.1.A.0.179   
Open source download for Xperia™ XA Ultra (SS F3211 F3213 F3215) (DS F3212 F3216); software version 36.1.A.0.179
          Open source archive for 39.2.B.0.292   
Open source download for Xperia™ X Performance SO-04H; software version 39.2.B.0.292
          Open source archive for 34.3.A.0.194   
Open source download for Xperia X (F5122, F5121), Xperia™ X Compact (F5321); software version 34.3.A.0.194
          Open source archive for 45.0.B.2.95   
Open source download for Xperia™ XZ Premium (SO-04J); software version 45.0.B.2.95
          Open source archive for 43.0.A.4.46   
Open source download for Xperia L1 (G3311, G3312 and G3313); software version 43.0.A.4.46
          GNOME.Asia Summit 2017 to be hosted in Chongqing China   
The GNOME.Asia Committee is proud to announce that the upcoming GNOME.Asia Summit 2017 will be hosted in Chongqing, China, on Oct 14-16. The GNOME.Asia committee has decided on the city of Chongqing because it uniquely represents an important theme: open source without any restriction of time, space, or location. Chongqing is located in the […]
          .Net Developer needed for high- ASP.NET, C#, Stored Procedures   
Cuyahoga Falls, Based in Hudson, Ohio, we are a top-tier company in our industry, always on the cutting edge of tech with our luxury product line. Due to continued success we are looking for a talented .Net Developer to help support our internal applications, reporting and intranet site. We have made several contributions to the open source community while maintaining a fast-paced, dynamic, fun work environment.
          Sr. Database Devops Engineer - Tesla Motors - Fremont, CA   
Explore and use open source tools to help detect DB issues proactively. Work closely with DB team members to automate Database provisioning including setting...
From Tesla Motors - Thu, 02 Mar 2017 06:23:24 GMT - View all Fremont, CA jobs
          Maybe someone knows something about this license   

@brejdus writes:


Please help me with a licensing question. Maybe someone knows something about this.
I have written a program for Linux. I would like to make a limited (free) version available on the net, while the full (paid) version would be sold via a HASP key. Since the program is not simple, I would like to offer for download not an installer but an image of a virtual machine (e.g. VMware, Virtual PC, etc.) with the appropriate components installed. That version could then run at the customer's site. For obvious reasons this will not be "Open Source".
Hence my questions:
- is it a good idea to publish a whole system image (won't I run into problems?)
- what license will this be, and what should I do about the licenses of Linux (Ubuntu Server) and the modules, so that nobody can object
- where is the best place to host such a program for download - because SourceForge, for example, probably doesn't qualify.

Thank you for any help

Posts: 2

Participants: 2

Read the whole topic

          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          SDE - Microsoft - Redmond, WA   
Knowledge of open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra and...
From Microsoft - Sat, 01 Apr 2017 03:13:29 GMT - View all Redmond, WA jobs
          Minutes of the steering committee meeting of 05.07.2007   
1. Project status:

It has become apparent that the migration concept can only be worked out in detail once the IDM system is largely in place, i.e. once the integration tests for the new system have been completed. According to Dr. Rygus, migration here means gradually detaching the individual systems from the existing directory service. In the process, the existing dependencies must be removed, orthogonalizing the data under constant consistency checks, and the old system connections must be shut down one by one.

However, it is already clear that the following functions of the current directory service cannot be covered by the IDM:

- Administration and billing of the print accounts

- RRZE accounting

For the latter point in particular, this means it must be examined whether HIS FSV offers a viable alternative or whether the introduction of a customer relationship management system has to be planned.

The UserApp training from 14.-16.05. brought to light three major points of criticism of the Novell User Application which Novell had not resolved by the time of the meeting. They were therefore raised again:

1. There are no APIs for the UserApp that customers can use.

That is, all access by custom portlets to the underlying eDirectory happens directly via LDAP.

Mr. Adam also regarded this situation as unfavorable. However, he emphasized that the UserApp and the underlying Novell products are not positioned as a standalone portal and were therefore not officially documented.

Official documentation is expected to become available with UserApp 3.5.1 in October, which would be too late for IDMone to base its development on. Mr. Adam will therefore try to make the Novell-internal documentation available promptly. It will not be very detailed, but with the involvement of the consultant Wolfgang Schreiber it should lead to reusable results.

2. Because of the missing APIs, customer portlets have no access to the Directory Abstraction Layer (DAL).

This has far-reaching consequences! All relationships defined there, e.g. the employee-manager relationship used to build the organizational structure, can be used by the integrated OrgChart module. Custom portlets are left out.

For Mr. Adam this represents an essential function of the portal integration. Novell will soon also support some commercial portals.

In this regard, Mr. Eggers pointed out that universities currently prefer portals under an open source license. The general discussion revolves around Liferay and uPortal. To ensure flexible integration, standards compliance is the top priority for universities.

Mr. Adam reported that from Novell's point of view JSR168 has turned out to be a dead end, since it does not support all required functions. Whether the successor standard JSR286 will be taken into account remained open. It also could not be clarified which single sign-on solutions (apart from Novell iChain) will be compatible. Novell is currently working on an AJAX-based integration.

For these reasons, Mr. Adam will organize a call with IDM product management in which these questions can be discussed.

From the RRZE's point of view this call should take place after 20.07.2007, since internal clarifications are needed first.

3. UC1203 Select affiliation

Mr. Schreiber could not offer a solution (missing interfaces). It thus remains unclear whether persons can switch between their different affiliations.

Dr. Rygus made it clear that not being able to switch between affiliations within a running session represents a considerable step backwards compared to the current web interface.

Mr. Adam does see some possibilities in changing the data model, but since the consultants advised against it, this point will probably also have to be discussed with IDM product management. No solution is in sight in the short term.

From 18.05. to 08.06.2007, work on IDMone was paused due to vacation.

The only thing worth mentioning from this period is the Novell campus meeting, since it did not succeed in making customers aware of the migration risk with Novell eDirectory. The migration of the Zenworks packages from version 3 to the current version in particular will surprise some.

It seems advisable to outsource the Novell integration completely to Novell Consulting.

The subject of the contract will be setting up an eDirectory 8.8 enclave in an eDirectory 8.7.3 environment, with provisioning of the enclave from the IDM, plus documentation and integration tests.

Mr. Adam will prompt Mr. Orschiedt to submit an offer promptly.

Regarding the consulting, the RRZE felt compelled to criticize Mr. Orschiedt's documentation.

For example, the minutes of a customer meeting on the Novell integration, and the effort estimate based on them, have been outstanding for more than 10 weeks. Interim reports like those Mr. Klasen provides after on-site appointments would also be desirable.

After some deliberation, Novell therefore recommends scheduling additional documentation time for Mr. Orschiedt at future appointments, since he is involved in various projects and thus lacks the necessary buffer time.

Furthermore, the past weeks of practical work have shown that programming the driver logic is considerably more laborious than planned.

Implementing the theoretical concepts is proving to be very labor- and coordination-intensive.

Vacation time and training also interrupted and slowed project progress, and minor technical problems contributed measurably to the delay.

As examples, two system integrations were discussed in depth.

The first was the integration of the source system HIS.

Here the customer meeting identified an integration via replacement of a central DLL as the preferred alternative.

However, the RRZE has no DLL programmer. Outsourcing is out of the question for lack of funds. And HIS has raised strong concerns about the solution regarding response behavior and system stability. HIS also currently lacks the resources to redesign the DLL according to IDMone's ideas.

Various alternatives were discussed at length, leading to the following result:

1. Novell Consulting will be asked for a written statement by e-mail confirming that the intended solution on the IDM side can meet the requirements for response time and stability.

2. If this statement is positive, Krasimir Zhelev will prepare an effort estimate for the DLL programming.

3. In parallel with this analysis, HIS will be contacted again to find a solution that burdens the project resources as little as possible.

4. If all attempts to implement the planned solution fail, the following alternative plan will be adopted.

4.1. The existing DLL will remain in use. The generated account ID will then only serve for activation.

4.2. Printing the e-mail address on the leporello will be suppressed. The ID generated by HIS SOS will be printed on the leporello labeled "activation ID". In addition, students will receive an information sheet on activation, on the services potentially available with it, and on the RRZE terms of use.

It will be necessary to create incentives for activating the account. The RRZE editorial team and the FAU marketing working group are to be involved for this.

4.3. On login, the temporary ID will be replaced by a semantics-free (IDM-compliant) ID, and students will be given the option of printing a login letter with all essential data or saving it as a PDF.

4.4. Feeding the activated ID back to the record in HIS SOS remains to be examined.

The second system integration with considerable changes concerns the e-mail system.

Here, contrary to the project assumption, a change of e-mail software is on the horizon, since the vendor of the existing system has discontinued the product. For this reason, the discussion with the e-mail group was resumed; they could be convinced that full provisioning from the IDM relieves them of user administration tasks and thus saves considerable work. However, the interface between IDMone and the e-mail system must now be defined very precisely. This means considerable extra effort compared to the integration planned so far; it can only be quantified exactly in the course of the process.

Mr. Lippert put the more elaborate integration up for discussion, since it does not come with additional resources and a project delay is therefore to be expected. Mr. Eggers made clear at this point that the IDM stands or falls with the integration of the e-mail system, and that this integration had been defined at project start as an essential part of Release 1. He asked that political deadlines be weighed against the mild pain of a mostly functioning directory service, which certainly leaves room for a comprehensive implementation.

The steering committee followed this position, albeit with considerable reservations.

Dr. Rygus also pointed out that the upcoming change in the organizational structure will bring fundamental changes to the IDM, since, contrary to the original concepts, the administration tree must after all be built as a hierarchical structure. Structural changes therefore entail considerable manual effort, especially since Novell Consulting advised against a program for automating organizational structure changes, the so-called move proxy. However, the reorganization process following the structural reform is currently progressing so slowly that usable results cannot be expected before Release 1. Talks with Dr. Steinhäußer about early provision of the information are nevertheless under way.

Mr. Orschiedt is asked for another effort estimate and a discussion of the pros and cons of the move proxy.

Overall, it must be noted that the 16.07.2007 deadline for a complete integration system will NOT be met.

Instead, the team now aims to complete an integration system with HIS, DIAPERS, and LDAP by 18.07.2007.

It would help if a DBA were available half a day per week for this. Mr. de West is asked to find a solution.

An updated project plan will also be presented by then.

On the status of making the UserApp accessible, Mr. Lippert reported that Novell will commission the company DIAS with a BITV certification. It must be kept in mind, however, that the BITV refers to static content, while the UserApp contains substantial dynamic parts. Nevertheless, the mood is positive that a sufficient score will be achieved.

Mr. Eggers described the planning of the load tests with the universities of Passau and Würzburg as currently dormant, because everyone is busy with their own projects. Once the first integration system is available, he wants to re-establish contact and make a fresh start on Novell.IDM@Bayern.

Furthermore, cost report #8 from Novell was briefly explained.

2. Risk management - review of the top risks / open items

Since all items were already addressed under agenda item 1, this excerpt from the risk management is mentioned only for completeness.

a) Category blocker:


Novell licenses (JDBC): resolved on 06.07.2007 by Novell (Volkmar Reiss)

b) Category critical:

Replacement of the current user administration by 2007

Accessible web interface for the Novell front end

Mapping of the organizational structure


Integration of the RRZE billing system

3. Outlook until the next meeting / next meeting

An updated project plan will be presented by 18.07.2007.

The next steering committee meeting will take place on 19.09.2007 and will mainly concern the report to the StMWFK.

The exact time will depend on Mr. Adam's flight.

4. Review of project organization / roles and responsibilities

From Novell's point of view, the role of project coach has become unnecessary. After intensive involvement at the start of the project, participation and the need for advice steadily declined. Mr. Lippert is therefore leaving the project.

Surprisingly, Mr. Lippert announced that he will also be leaving Novell to take on new challenges.

The RRZE and IDMone thank Mr. Lippert warmly for the work he has done. He was a real pillar of support and made the good, brisk project start possible in the first place! THX!

Dr. Rygus and Mr. Eggers have differentiated their roles and areas of responsibility.

Dr. Rygus is thus the project lead of IDMone.

Mr. Eggers will concentrate on his work in the Projects & Processes staff unit, but will still actively contribute to IDMone in cross-cutting tasks.

          Print your own aquaponics garden with this open source urban farming system   
Aquapioneers has developed what it calls the world's first open source aquaponics kit in a bid to reconnect urban dwellers with the production of their food.
          Cyberduck for Windows   
Cyberduck for Windows is open source software that can connect to FTP (File Transfer Protocol), SFTP (SSH Secure File Transfer), WebDAV (Web-based Distributed Authoring and Versioning), Amazon S3, Google Cloud Storage, Windows Azure, Rackspace Cloud Files, and Google Docs to distribute your file ...
Continue reading

          New Test Automation of the octoBox STACK Wireless Testbed Improves Test Coverage and Speeds up MIMO Over-the-Air Throughput Measurements   

New powerful test automation software controls throughput vs. range and antenna orientation measurements using open source iPerf in the octoBox STACK wireless testbed; produces graphical test reports.

(PRWeb December 01, 2014)

Read the full story at

          Hybrid cloud: which tools for the developer?   

IT environments are fundamentally heterogeneous by nature, first because sensors and information sources, (virtualized) systems and people are distributed all over the globe, but also because technological change is constant and set to continue.

Thanks to the inexorable success of Linux and open source technologies, the public cloud has become an obvious choice; at the same time, the quantity and quality of data has imposed a reality: moving data quickly proved to be costly, limited, or even impossible.

Not everything will go down the drain of the public cloud: let us review hybrid strategies, those based on interfaces, on full deployments, and on containers; and finally, let us examine how development approaches contribute to total agility.

Read the rest in our Espace IBM

Eric Aquaronne
Head of cloud strategy for IBM hardware platforms
cloud, DevOps

          CMSIS-DAP work: implemented raw JTAG support, and ported the HID firmware to Pro Micro and Teensy 3.2   

While I am so happy to have my J-Link back, the couple of weeks without it have been very productive in terms of open source contributions. After finding out that OpenOCD didn't support raw JTAG mode on CMSIS-DAP adapters, I bit that off as a potential project, and eventually got it working, then did some performance tuning, and I'm pretty pleased with it now. With my LPC-Link2, it can program the flash in an ATMEGA32U4 over JTAG at about 1/4-1/3 of the speed of the J-Link (which is kind of a speed demon). I'm going to let it soak on GitHub for a while, then clean it up and submit it to OpenOCD once it's had a bit of scrutiny.

Clone myelin/openocd and check out the cmsis-dap-jtag branch to try it out.

Implementing this required getting pretty familiar with the CMSIS-DAP source code and protocol, and at some point I realized that it wouldn't be super hard to port the SWD/JTAG debugger part of the CMSIS-DAP firmware over to any USB capable microcontroller. Full CMSIS-DAP support requires the debugger part, plus a serial port and a mass storage device emulator that's also capable of flashing a .bin file to a chip, but I'm not doing this to provide a USB interface to a custom board (like the ones on the mbed site), so I've skipped those two. I'll probably add in the serial port sometime, because I wrote serial bridge code for teensy-openocd-remote-bitbang already.

This was mainly a matter of getting rid of Keil-specific code, plus a small amount of debugging:

  • The _OUT functions (e.g. PIN_SWDIO_OUT) take a uint32_t with a boolean value in the LSB, but often junk in the higher bits.
  • A 32-bit processor is expected, so there was one point where I needed to add (uint32_t) casts to avoid losing the high bytes of a word.

So here's CMSIS-DAP firmware for your Pro Micro (ATMEGA32U4) or Teensy 3.2 (MK20DX256) board.



          Debugging the ESP8266 with JTAG -- breakout board with an SWD-style JTAG connector   

I found out a few days ago that it's possible to debug the ESP8266 using JTAG (and the Xtensa-specific xt-ocd, or the open source OpenOCD). It looks like work to make this possible has been going on for about a year now, as well as related work to develop UART-based GDB stubs (there's now an official one!).

This doesn't seem to have attracted nearly as much attention as I would have thought. Maybe because hobbyists nowadays are used to the Arduino platform, which doesn't include any on-chip debugging support? (Unless you cut some traces and plug in an Atmel ICE unit.) The ubiquitous availability of on-chip debugging is probably my favourite thing about working with ARM Cortex-M chips; it's kinda painful to go back to an environment where I don't have access to that. I'm currently working on some DMX512 hardware, and recently had a yak-shaving-like need to implement my own non-blocking software UART; this wouldn't have been possible without being able to breakpoint and single-step using KDS, SWD, my J-Link, and my Saleae Logic 8.

Anyway, I'm super excited to give this a try on an ESP8266. Unfortunately all my ESP-03 modules are soldered into boards that tie GPIO15/MTDO to ground, and otherwise I only have a couple of ESP-01 units, which don't bring out the JTAG pins.

So... time to get an ESP-12 or two, and whip up a board with all the bits and pieces JTAG requires!

JTAG is an old interface/protocol, and is designed to daisy chain through a bunch of chips, so it has much more in the way of pullup/pulldown requirements than SWD. Here's what I ended up putting on my board:

- Pulldown on TDO. This should really be a pullup, except that TDO/GPIO15 is part of the boot_sel combo (GPIO15:GPIO0:GPIO2), which has to be 011 to boot from flash, so GPIO15 must be pulled low if you ever want to boot without an attached debugger.

- Pulldown on TCK. This is also nonstandard; most JTAG diagrams show no pull resistors on TCK, but have it terminated with 68R and 100pF in series to ground. ARM recommends a pulldown, though -- to avoid spurious clock edges during hot-plugging -- so I'm going with that.

- Pullups on TDI and TMS. This is standard JTAG.

- Pullups on CHIP_EN and /RESET -- always required on the ESP8266.

I chose to use an SWD-style 2x5 1.27mm connector, which will hopefully let me connect this to my J-Link using the same cable that I use for ARM debugging. Here's how the board looks:

Sending this off to OSHPark shortly! Design files and most recent gerbers are on GitHub.


          Serial Wire Debug - properly wiring up a Cortex-M debug connector, and debug adapter/software thoughts   

One of the things that confused me the most when moving between the AVR and ARM Cortex-M worlds was what debug/flash interfaces to support. The LPC11U1x series has a UART based flash download method (shared with the LPC810, according to this Adafruit tutorial), and the LPC11U2x/3x chips add in various USB options. STM32F chips have their own method, and I've seen an I2C downloader for WLCSP (very small) versions of Freescale Kinetis chips.

All of these are red herrings! The only interface you need is SWD, an ARM standard that gives you a connection right into the processor bus -- i.e., pretty much full control of the chip. It's active all the time unless you've disabled the SWDIO and SWCLK pins or assigned them to other duties, and has very good open and closed source support across the board.

This PDF from ARM explains the connector you need. It's a 2x5 header with 1.27mm (0.05") pitch. One row of pins for power/ground, another for data. There are some gotchas in there, and here is what I've figured out through trial and error. First, the power row:

1 - VCC - connect this to your MCU power net to get convenient power right from the debug adapter. You might want to put a jumper in series if you'll sometimes want to self-power the board, or leave this disconnected entirely if you'll *always* have the board self-powered.

3 and 5 - GND - connect to your ground net.

7 - KEY - leave this unconnected

9 - GNDDetect - this is for target boards to detect the presence of a debugger. I always leave it unconnected, but if knowing a debugger is connected is useful to you, put a pullup to VDD and feed this into a GPIO.

Now, the data row:

2 - SWDIO - connect this to your MCU's SWDIO pin. Pull-up/down resistors are usually unnecessary here, although check your datasheet to be sure.

4 - SWCLK - connect this to your MCU's SWCLK pin. Add a pull-down resistor to ground, between 10-100k.

6 - SWO/TDO - connect this to SWO (Serial Wire Output) on your MCU if you're using a Cortex-M3/M4. Cortex-M0/M0+ chips don't implement SWO, so I like to connect it to TXD on a spare UART in that case, which gives me a convenient serial console over the debug connector (I have a board that breaks out the UART pins separately so they don't go to the debugger).

8 - NC/TDI - either leave this unconnected, or connect it to RXD on a spare UART if you like my UART multiplexing trick.

10 - nRESET - connect to your MCU's /RESET pin. This usually needs a 10-100k pullup to VDD.
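
To keep the wiring straight, here is the same pinout as data (a reference sketch following the list above; the UART assignment on pins 6/8 is my own multiplexing trick, not part of the ARM standard):

```python
# ARM Cortex-M 10-pin (2x5, 1.27mm pitch) debug connector, per the list above.
# Odd pins form the power/ground row; even pins carry the debug signals.
SWD_PINOUT = {
    1:  "VCC",        # target power/reference (optionally jumpered)
    2:  "SWDIO",      # bidirectional debug data
    3:  "GND",
    4:  "SWCLK",      # debug clock, pulled down on the target
    5:  "GND",
    6:  "SWO/TDO",    # trace on M3/M4; spare-UART TX on M0/M0+
    7:  "KEY",        # keying position, no connect
    8:  "NC/TDI",     # unconnected, or spare-UART RX
    9:  "GNDDetect",  # optional debugger-presence detect
    10: "nRESET",     # reset, pulled up to VDD on the target
}

def row(pin: int) -> str:
    """Which physical row a pin sits on (odd = power row, even = signal row)."""
    return "power" if pin % 2 == 1 else "signal"
```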

That's the electrical side of things sorted. Now, the physical. You'll find that SMD and thru-hole 0.05" pitch connectors are surprisingly expensive compared to 0.1" headers. After buying a bunch of $0.70 SMD 2x5 connectors and $0.30 thru-hole ones, I found a vendor on Aliexpress who was selling 10-packs of 2x50 connectors (of either type) for about $8, which can easily be chopped up into whatever length you like. This brings the cost of a 2x5 connector down to about $0.10, which seems much more reasonable.

Finally, debug connectors and software. Each chip family, and sometimes each chip, has a different flash controller, so you'll find that you need software for your particular device before you can erase, flash, or debug it. Here's what I've gathered so far:

- The SEGGER J-Link is a very very flexible debug adapter, that supports pretty much any Cortex-M chip, and is supported by pretty much every IDE. It will save you a ton of time to buy one of these, but you'll be paying $400 for a commercial-licensed version if you're doing anything other than educational/hobbyist work (in which case the $60 "EDU" version will do).

- CMSIS-DAP is ARM's standard for SWD over USB. Any CMSIS-DAP compliant adapter should work with any CMSIS-DAP compliant tool. As of now, there aren't a ton of these, but if you buy a development board, there's a good chance you'll be able to reflash the debug connector side of it with CMSIS-DAP firmware, and use it as a CMSIS-DAP dongle for any chip you can find software for (i.e., you're not limited by processor family any more). The firmware is open-source, available on GitHub.

- Free IDEs exist for most chip families. For Atmel ATSAM chips, there's the Visual Studio based Atmel Studio (which requires a J-Link, or the $50 ATMEL ICE adapter, to program chips). For Freescale Kinetis chips, you want the Kinetis Design Studio (which supports J-Link out of the box, or you can hack up a Freescale Freedom dev board and program it with the USBDM firmware). For NXP LPC chips, there's LPCXPresso (which supports CMSIS-DAP, J-Link, and NXP's $20 LPC-Link2, or you can hack up an LPCXPresso dev board).

- I'm a little confused about the STM32F chips from STMicroelectronics. They have super cheap ~$10 "Discovery" dev boards, which come with an on-board ST-LINK/V2 debug adapter (available separately for ~$20), but as far as I can tell, they don't have a vendor-supported free toolchain. However, there's a ton of open-source support, and you can put together a free toolchain without too much trouble. For flashing/debugging your chips, you'll want OpenOCD, which supports pretty much any debug adapter under the sun.

While we're talking about OpenOCD... it is an awesome open source project that I would love to use/contribute to if I find time. It seems like the Right Thing to do, allowing you to debug many kinds of chips with many kinds of adapter. I see it has support for some members of the Freescale Kinetis family, and it was able to connect to a MKE04Z8VTG4 using my LPC-Link2 with CMSIS-DAP firmware, but doesn't have support for the FTMRE flash controller there. Porting the flash code from USBDM into OpenOCD would be an excellent project, and would enable OS X support for more Kinetis devices. Both projects are licensed under the GPLv2.

That said, I'm probably going to buy myself a J-Link soon, because I really like the look of the Atmel ATSAM4S16B -- $5 in single quantities, 120 MHz Cortex-M4 with 1MB flash and 128k sram -- and I want to use Atmel Studio. So far I've been using a Freescale FRDM-KE04Z board with USBDM to debug my MKE04Z8VTG4 and MKE02Z64VLD2 based boards, and an LPC-Link2 to debug my LPC11U14 and LPC11U12 based boards.


          GitHub declares Friday Open Source day, with #OpenSourceFriday   
These days open source is more than consolidated. We find it everywhere, not only in professional or geek computing; even Windows users use more open source than they imagine. However,[...]
          (USA-FL-Boca Raton) EAI Engineer Lead   
Responsibilities: The Enterprise Application Integration (EAI) group is responsible for leading, delivering, and supporting enterprise integration solutions for Office Depot. The EAI Engineer Lead is accountable for middleware component and infrastructure management practices, processes, and procedures and will identify efficiency and effectiveness levers that support, continuously improve, and coordinate architecture/solutioning, project delivery and leadership, and operations/support functions. This role will lead process improvement, analysis, planning, documentation, metrics, asset development, training, communication, and execution associated with all aspects of infrastructure and operational management. This person will demonstrate independent thinking and activities management, with the ability to direct and influence practices, processes, and procedures in a coherent and consistent fashion, complementing EAI strategy and objectives.

SUMMARY OF RESPONSIBILITIES:
  • Develop, manage, and ensure realization of tactical and strategic operations plans for the EAI Group
  • Develop, implement, and measure application and infrastructure processes, policies, methodologies, templates, standards, and procedures to meet goals for quality, time-to-market, and ROI/TCO
  • Participate in and drive SDLC checkpoint reviews (peer, design, standards, etc.) for internal and external projects
  • Define and implement manual and automated practices related to implementation and support for EAI components
  • Provide quantitative and qualitative insights on progress and challenges on a periodic and ad hoc basis
  • Research and propose innovative methods to improve operations and support performance; provide thought and consultative leadership
  • Lead internal/external audit requests
  • Develop and maintain training plans for engineering and support practices
  • Lead engineering architecture planning and design for all related components
  • Evaluate project proposals and effort estimates to implement
  • Installation and configuration for environment builds, including post-build expansion
  • Product patching and analysis for implementation
  • Capacity planning and scaling analysis for all components
  • Create and maintain documentation as needed for future reference, knowledge transfer, and support turnover
  • Performance monitoring and identification of related issues
  • Manage engineering service requests and defects
  • Support code migration, automation and troubleshooting
  • Engage with other IT tech teams and request systems for troubleshooting or infrastructure dependencies related to project task completion
  • Provide guidance for support teams as relates to their assigned tasks
  • Initiate, coordinate, communicate and drive production changes via formal change management system
  • Take ownership of assignments, including vendor initiation and requests, and drive them to conclusion
  • Implement monitoring and automation solutions as needed for existing or new components; Dynatrace experience is a plus
  • Participate in 7x24 on-call rotation support for production
  • Assist with additional duties and responsibilities as assigned

Qualifications:
  • Bachelor of Science in Computer Science, Information Systems, or equivalent.
  • Minimum of 6+ years of overall experience in information systems, including Java/J2EE development.
  • 3+ years of experience in application integration development and support of multi-platform technical environments within the integration/middleware space with products such as RedHat, Tibco, WebMethods or Oracle Fusion.
  • 2+ years of experience in conducting quality assurance practices, including testing, metrics definition, process development and improvement.
  • Strong self-management skills with the ability to effectively collaborate with peers and senior management.
  • Prior technical leadership experience in an engineering capacity working on multiple infrastructure environments and projects.
  • Good understanding of IS/IT concepts across a broad spectrum, which includes: application development, service-oriented architecture, and application integration.
  • Solid experience with automation frameworks, scripting languages, and test tools, including SoapUI, HP UFT/QTP, Selenium or similar.
  • Understanding of software quality testing approaches and concepts (e.g. API testing, test approach selection, black-, grey-, and white-box test approaches, etc.).
  • Knowledge and experience with various test types - unit tests, volume tests, compatibility tests, integration tests, web-stress tests, system tests, etc.
  • Experience with the design and development of automation frameworks (including commercial automation testing tools, open source tools and scripting languages).
  • Experience with non-functional testing approaches (performance, security, accessibility, internationalization, etc.) preferred.
  • Experience in EAI architecture and solutioning, application security, and project delivery.
  • Strong verbal and written communication skills to provide reports and documentation (e.g. test reports, test strategies, test plans, test cases, and bug tracking tools).
  • Strong knowledge of the basic principles, processes, phases and roles of application development methodologies (SDLC).
  • High proficiency in the use of quality management methods, tools, and technology used to create and support defect-free application software that meets the needs of the business partner.

Other Information:
  • Strong knowledge of implementation and administration of RedHat Fuse/Apache Camel, Oracle SOA Suite 11g & OSB 11g is a plus.
  • Implementation and configuration experience with integration adapters, such as RedHat Fuse or Oracle Fusion adapters (Apps, DB, MQ, JMS, File, FTP, etc.).
  • Linux, Unix, Windows Server; AS400, z/OS is a plus.
  • Scripting knowledge (WLST, ANT)
  • Strong knowledge and experience with JVM monitoring/tuning
  • Highly self-motivated, self-directed, and attentive to detail.
  • Takes initiative with focus to fully complete tasks; excellent time management.
  • Excellent analytical, troubleshooting and problem solving abilities.
  • Strong written and verbal communication skills

Pay, Benefits and Work Schedule: Office Depot and OfficeMax offer competitive salaries and a benefits package, which includes a 401(k) and more, along with plenty of opportunity to move and grow within our organization! For immediate consideration for this exciting position, please click the Apply Now button.

Equal Employment Opportunity: Office Depot and OfficeMax are committed to providing equal employment opportunities in all employment practices. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, citizenship status, marital status, age, disability, protected veteran status, sexual orientation or any other characteristic protected by law.
          Eclipse gets ready for Java 9 with Oxygen release train   

The Eclipse Foundation’s annual release train, featuring simultaneous updates to dozens of projects, has just arrived, featuring preliminary Java 9 support. Called Oxygen, the release train covers 83 projects and includes 71 million lines of code.

Here are the key updates in Oxygen:

  • Java 9 support remains in beta stage, because Java 9 itself will not be made available until Java Development Kit 9 ships on September 21. Oxygen’s Java 9 support includes the ability to add the Java Runtime Environment for Java 9 as the installed JRE as well as backing for the Java 9 execution environment. Developers also can create Java and plug-in projects using Java 9 and compile modules that are part of a Java project. Eclipse’s signature Java IDE has been enhanced as well, with improvements to the UI.
  • Eclipse Linux Tools 6.0 updates Docker Tools with more security options. This project provides a C/C++ IDE for Linux developers.
  • Eclipse PDT (PHP Development Tools) 5.0 supports the 7.1 version of PHP, which offers nullable types and a void return type.
  • The Eclipse Sirius 5.0 platform for building domain-specific modeling tools, with usability enhancements.
  • Eclipse EGit 4.8.0, offering performance and usability for the Java implementation of Git code management integration for Eclipse.

Focused on open source tools, Eclipse has offered annual release trains every June since 2006, letting developers coordinate upgrades or new releases of multiple projects. Last year’s release train, Neon, offered tools for Docker and JavaScript. June 2018’s release is slated to be called Photon.

To read this article in full or to leave a comment, please click here

          AxCrypt 2.1.1513   

AxCrypt is a free, easy to use and open source file encryption tool for Windows 2000/2003/XP/Vista/2008/7 integrated into Windows Explorer. Encrypt, compress, decrypt, wipe, view and edit with a few mouse clicks. Cryptographic primitives are AES-128 and SHA-1. No configuration is necessary for AxCrypt: just download it, run the installer, and it's ready to go.

Thanks to Siddharta for the update.


          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
          SDE - Microsoft - Redmond, WA   
Knowledge of open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra and...
From Microsoft - Sat, 01 Apr 2017 03:13:29 GMT - View all Redmond, WA jobs
          The H Open Source   
          A Canvas of Data & Indian Card Industry   

The Indian card industry has gone through several interesting changes in the recent past. We wanted to explore this fascinating world from both the card issuing and merchant acquiring perspectives within our chosen study period of around 6 years - from Q2 2011 to Q4 2016. However, rather than hunting the data to prove any pre-defined notion, we wanted to listen to the data and capture the story it wants to tell us. We took a canvas of raw primary data from a number of sources ranging from the Reserve Bank of India to the World Bank, from the Ministry of Finance to the Ministry of Statistics & Programme Implementation (of the Govt. of India), and a number of other sources like Yahoo Finance and the Index Mundi data portal. As our tools of choice we used R (open source software) and Excel. In order to uncover the underlying story behind the data, we used an array of techniques ranging from descriptive trend charts, heat maps, multidimensional bubble plots and outlier charts to advanced data analytics techniques like clustering using machine learning algorithms. We also uncovered the correlations of card industry parameters with other economic and social indicators, and went ahead in building an optimum causal predictive model as well as a time series forecasting model.
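
As a flavour of the correlation step described above, here is a minimal Pearson correlation sketch in Python (the study itself used R; all numbers below are made up for illustration and are not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarterly series: card spend vs. a macro indicator.
card_spend = [10, 12, 15, 21, 28, 40]
macro_index = [100, 104, 109, 115, 123, 132]
r = pearson(card_spend, macro_index)   # close to +1 for these made-up series
```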

          How to Monitor your CentOS 7 Server using Cacti   

HowToForge: Cacti is a free and open source network graphing solution.

          How to Install Joomla with Apache on Debian 9 (Stretch)   

HowToForge: Joomla is one of the most popular and widely supported open source content management system (CMS) platform in the world.

          Jr.Php Developers-Freshers - Code Hub - Yanaikkal, Tamil Nadu   
Get trained & become an open source web developer in PHP/MySQL. St. Johns building, 3rd floor. Are you working in an irrelevant industry but you have studied CS / IT...
From Indeed - Sat, 22 Apr 2017 11:31:34 GMT - View all Yanaikkal, Tamil Nadu jobs
          Wikia Search: an open source search engine   
Last Friday, by acquiring the Grub web crawler, the Wikia company took a big step toward launching Wikia Search, the largest open source search engine. The search engine is set to launch late this year. The guiding principle of this huge project originates from Wikipedia, and here too human collaboration, rather than computer algorithms, will be the basis of operation. According to Jimbo Wales, founder of […]
          Oracle Launch Webcast - New Big Data Cloud Service: Big Data Cloud Machine - June 28, 19:00 CET   

During the 1-hour launch webcast on June 28 at 19:00 CET, key speakers from Oracle, Intel and Cloudera will discuss the latest Oracle Cloud at Customer offering empowering customers to seamlessly shift their big data and analytics initiatives to the cloud.

Join Paul Sonderegger, Big Data Strategist at Oracle, along with key speakers from Intel and Cloudera for an in-depth webcast to hear about the latest Oracle Cloud at Customer offering, empowering customers to seamlessly shift their big data and analytics initiatives to the cloud. With Oracle’s newest big data cloud service for Hadoop and Spark, enterprises can leverage their data to maximize profitability and drive a competitive advantage with their data capital.

Join us to hear:

  • How enterprises are gaining competitive advantages leveraging their data capital
  • How Oracle’s Big Data Cloud Service offering accelerates time-to-value 
  • Customers discuss their own Big Data success stories
  • Leading analytics partners discuss the value they bring to Oracle’s newest Big Data Cloud Service
  • About the broad open source and analytics portfolio support with Oracle’s new Big Data Cloud Service
  • How to start your Big Data journey with Oracle’s newest cloud offering

Don’t miss this launch webcast. Register today!

Libnids is a C library that works together with libnet and libpcap; install those from the Ubuntu repositories. Even though programming skills are necessary to use this open source library, it is well worth it.
          Last Chance to Submit Your Talk for Open Source Summit and ELC Europe   

Submit your proposal soon to speak at Open Source Summit and Embedded Linux Conference (ELC) taking place in Prague, Czech Republic, October 23-25, 2017. The deadline for proposals is Saturday, July 8, 2017. Don’t miss out on this opportunity to share your expertise and experience at these events.

          Emotiv EPOC Headset Hacked   
Back in 2008 on this blog I reported on Emotiv EPOC headset development and flagged its potential for thought-controlled input. A few days ago the H+ magazine blog featured an interview with Cody Brocious, who has created the Emokit project, an open source library for reading data directly from the Emotiv EPOC EEG headset. […]
          Adding transformation passes to Traceur Compiler   
Traceur Compiler is an open source ES6-to-ES5 compiler. We released it a while back, but today I finished writing a tutorial about how to add new transformation passes. As you can see, it is pretty easy to add new passes, and all you need to know is JavaScript to prototype the future of the language.
          PrestaShop and Zen Cart, two open source ecommerce platforms   
Zen Cart presents itself as yet another platform with the distinguishing characteristic of being open source. It is developed jointly by entrepreneurs, designers and programmers, because no one is better placed than the users themselves to give their view on electronic commerce. Zen Cart in Spain, according to the Xopie study, is not among the […]
          What are the advantages of the new PrestaShop 1.6 version   
PrestaShop has become one of the most complete ecommerce solutions for creating online stores. Not for nothing has it been recognized with the Open Source award as the best on the current market, today counting more than 150,000 online stores built with this program. This is thanks to the wizard […]
          Oracle open sources new tools focused on containers   

Oracle is helping development teams build and operate containers with three new tools it is releasing into open source. The tools — Smith, Crashcart and Railcar — are designed to tackle containerization challenges commonly faced.   “Containers are more popular than ever. Here at Oracle, we use containers for many of our cloud services. While … continue reading

The post Oracle open sources new tools focused on containers appeared first on SD Times.

          NASA's systems for sharing code   

NASA has been creating code for decades and boasts more than 300 public open-source projects. The agency's challenge is not getting buy-in for open source so much as it is managing the enthusiasm for it.

          Super Shortcut   
Replace Several Applications Shortcut with One Super Shortcut.
It helps you to access your Favorite Apps Quickly & Empty your Home Screen to Enjoy your Wallpaper...

Simply Click on an App to Select & then Press the Confirm ✔️ Button to Create Shortcuts.
Then Move This App Shortcut to Home Screen & Delete other Shortcuts.

+ Smart Pick: App will create App Shortcut of Most Used Apps.

Tip: <b>Swipe Up on Confirm Button</b> to see the selected apps.

<b>Note</b>: Some Home Launchers show Fewer Popup Shortcuts & there are different System Restrictions on the number of these Popup Shortcuts. You can see how many your device allows inside the App.

* Facebook
* Google+
* Twitter
* Open Source Project of Floating Shortcut on XDA
* Please Write your Opinion to Improve current Features & to Add New Features
          Super Shortcut ᴾᴿᴼ   
Replace Several Applications Shortcut with One Super Shortcut.
It helps you to access your Favorite Apps Quickly & Empty your Home Screen to Enjoy your Wallpaper...

Simply Click on an App to Select & then Press the Confirm ✔️ Button to Create Shortcuts.
Then Move This App Shortcut to Home Screen & Delete other Shortcuts.

+ Smart Pick: App will create App Shortcut of Most Used Apps.

Tip: <b>Swipe Up on Confirm Button</b> to see the selected apps.

<b>Note</b>: Some Home Launchers show Fewer Popup Shortcuts & there are different System Restrictions on the number of these Popup Shortcuts. You can see how many your device allows inside the App.

* Facebook
* Google+
* Twitter
* Open Source Project of Floating Shortcut on XDA
* Please Write your Opinion to Improve current Features & to Add New Features

Recent changes:
/* Please Rate & Write Review */

+ Smart Pick: App will select most used apps & create their AppShortcuts
+ Few Graphical Interface Optimization
          TypeScript 2.4: Dynamic imports and weak types   
The TypeScript 2.4 release might be a minor update in terms of not requiring substantial changes within our open source work and customer projects, but it provides some major benefits that we are already leveraging throughout the Dojo 2 codebase. Dynamic import() The headline feature for TypeScript 2.4 is support for the ES.Next dynamic import() […]
          GitHub’s ‘Open Source Friday’ Wants Office Hours   

GitHub has a plan to get more contributions to the open-source community: it now wants you to spend your ‘20 […]

The post GitHub’s ‘Open Source Friday’ Wants Office Hours appeared first on Dice Insights.

          Cloud Engineer   
Ability to quickly learn and apply a wide variety of open source technologies and cloud services. Bachelor's degree in Computer Science with at least 5 years of...


          Windows App Studio being sunset   

We want to directly thank each of the users of Windows App Studio and we want to be sure you have a smooth transition off when Windows App Studio service ends on December 1, 2017. What will happen to App Studio afterwards? Windows Template Studio is the evolution of Windows App Studio. We took our learnings from the code generation engine and the existing wizard to provide a strong foundation for our code generation and developer experience in Windows Template Studio. Best of all, it is open source over at

Details on the transition

Windows App Studio has been a free, online app creation tool that allowed enthusiasts and developers to quickly build complete Windows Universal Apps. Applications built with Windows App Studio could then be downloaded, extended, and compiled with Visual Studio and submitted to the Windows Dev Center.

Any user of Windows App Studio will need to download your projects and data prior to December 1, 2017.

We’ll provide multiple email communications with users between now and December 1, 2017, but we want to be upfront and clear that you have a path forward to continue building great apps for Windows 10. We’re doing a phased approach with the sun setting process. Here are the three critical dates:

  • July 15, 2017
    • Only existing users can sign in
    • Finished application projects can be downloaded
    • No new dynamic collections data sources allowed to be created
    • Dynamic data will be allowed to be downloaded with a migration path provided
  • September 15, 2017
    • Application editor will stop working
    • Dynamic collections API will stop providing data to your existing applications
  • December 1, 2017
    • Windows App Studio will be shut down

Once again, we want to thank each of the users of Windows App Studio, and we view a smooth transition for users as critical.

The post Windows App Studio being sunset appeared first on Building Apps for Windows.

          A review of GIMP 2.6.2   
GIMP continues to grow and is quickly becoming serious competition for Adobe Photoshop. This article reviews the latest version of GIMP and how it compares to Photoshop. It's time to start taking this free program seriously. Gimp 2.6.2 GIMP, the GNU image manipulation program, is basically the open source version of Adobe Photoshop. These two p...
          Linspire 6 - Blends the Best in Open Source and Commercial Software   
Linspire is the result of a blend of one of the most popular Linux desktop distributions, Ubuntu, with commercial software and codecs available right out of the box. As you'll see, Linspire is an interesting take on Linux desktop distributions. Introduction What happens when you blend one of the most popular Linux desktop distributions, Ubuntu,...
          Meet us at FOSDEM 2012   
This weekend (Feb. 4 - 5), FOSDEM, The Free and Open Source Developers European Meeting will be held at the university of Brussels and NetBSD will be present with a booth and there will be NetBSD related talks and presentations in the BSD devroom on sunday.

This is a good occasion to meet and discuss with NetBSD and pkgsrc developers, or, to use the occasion to buy NetBSD merchandise and / or to donate to the project.

For more details about FOSDEM, visit The schedule of the BSD devroom is available here.

          NetBSD participating in Google Summer of Code™ 2011   

Google Summer of Code is a program that offers student developers stipends for a 3 month programming project with the participating open source mentoring organization of their choice.

NetBSD is among the 175 projects chosen to be mentor organizations for Google Summer of Code 2011.

Look at the list of suggested projects, and if you are eligible to participate, hit the appropriate mailing list(s) to discuss those projects that appeal to you. If you have a project idea that is not listed: it's entirely allowed™ to propose your own project. Please also discuss your own project idea on the appropriate mailing list before applying for it.

If you're not eligible but know people who are and who would be interested in working on a project, but are too shy to apply: nudge them :)

And lastly: if you are eligible and want to participate, but the projects possible in NetBSD are really not in your current scope: there are 174 other worthy projects where the set of programming languages you know today may be highly welcome.

          NetBSD at the 17th LinuxTag in Berlin   
The 17th LinuxTag takes place from May 11th - 14th, 2011 in Berlin.

In recent years there have been 10,000 - 11,500 visitors from all over the world at the Berlin Exhibition Grounds. There are a lot of exhibitors, all centered around open source software.

The slogan of the LinuxTag convention is "Where .com meets .org". Not only established and ambitious free projects take part, but also companies that support free software.

Every day the visitors can choose between different workshops, keynotes and lectures.

You can read more about the program on

NetBSD is presented together with the other BSDs.

If you are staying in Berlin during this time, then visit LinuxTag!

Do you want to get more information or to help at the NetBSD booth? Please contact Thomas Kaepernick (mast_1 (at)

          NetBSD@FOSDEM 2011   
On the first weekend of February, FOSDEM, the biggest European open source developers gathering, was again held in Brussels, Belgium. With several thousand attendees from all over the world, though most from Europe, FOSDEM is one of the highlights of the year.

NetBSD was very well represented with a booth, together with the FreeBSD folks, and a talk covering the recent addition of the Lua programming language to the base system.

Guillaume Lasmayous (gls@), Vera Hardmeier, and myself were almost constantly at the booth selling T-Shirts, CD-ROMs, and other merchandise and using the occasion for marketing NetBSD a bit and having technical discussions with NetBSD users (and prospective users, I hope).

During the BSD devroom I gave a talk "Lua in NetBSD", outlining language details, techniques to incorporate Lua into existing software, and also why Lua in NetBSD makes a lot of sense for certain applications. That talk was very well received and attracted a lot of people.

FOSDEM 2011 was a big success, again!

          Open Source Adreno Project “Freedreno” Receives New Update   
Users of Freedreno, the open-source graphics driver support for Adreno on Linux distributions, will be pleased to know that a new update has been released in the past week. Lead developer Rob Clark di ... - Source:
          GitHub Declares Every Friday Open Source Day And Wants You to Take Part   
GitHub is home to many open-source development projects, a lot of which are featured on XDA. The service wants more people to contribute to open-source projects with a new initiative called Open Sourc ... - Source:
          Benefits Offered by Ruby on Rails Development Services   

Ruby on Rails is an open source framework that allows web developers to work with a flexible approach, enhancing the features of a web application. It is one of the most productive frameworks, giving developers the flexibility to control the framework.

          Why Choose Laitkor as RoR Web Development Partner?   

Ruby on Rails, or RoR, is open source software that offers programmers an optimized, convention-based approach to configuration. It differs from other web application development frameworks in many ways.

          The Benefits of Using PHP With Laravel To Create Advanced Applications   

Laravel is an open source PHP framework that follows the Model-View-Controller (MVC) pattern. Developers use this framework to provide websites with components, design, and tools.

(Web Services Resource Framework) Web services for grid computing. WSRF defines conventions for managing 'state' so that applications can reliably share changing information. In combination with WS-Notification and other WS-* standards, the result is to make grid resources accessible within a web services architecture. Coupled with WS-Notification, the specification is a response to, and supersedes, the grid community's own first effort to converge grid and web services, the Open Grid Service Infrastructure (OGSI), which the Global Grid Forum (GGF) and others released in 2003. Announced by the Globus Alliance and IBM (with contributions from HP, SAP, Akamai, Tibco and Sonic) in January 2004, WSRF is due to be implemented in version 4.0 of the open source Globus Toolkit for grid computing, as well as several commercial packages. It consists of several component specifications, including WS-Resource Properties, WS-ResourceLifetime, WS-ServiceGroup and WS-BaseFaults.
          Similarity-Aware Query Processing and Optimization   

Many application scenarios, e.g., marketing analysis, sensor networks, and medical and biological applications, require or can significantly benefit from the identification and processing of similarities in the data. Even though some work has been done to extend the semantics of some operators, e.g., join and selection, to be aware of data similarities, there has not been much study on the role, interaction, and implementation of similarity-aware operations as first-class database operators. The focus of this thesis work is the proposal and study of several similarity-aware database operators and a systematic analysis of their role as query operators, interactions, optimizations, and implementation techniques.

This work presents a detailed study of two core similarity-aware operators: Similarity Group-by and Similarity Join. We describe multiple optimization techniques for the introduced operators. Specifically, we present: (1) multiple non-trivial equivalence rules that enable similarity query transformations, (2) Eager and Lazy aggregation transformations for Similarity Group-by and Similarity Join to allow pre-aggregation before potentially expensive joins, and (3) techniques to use materialized views to answer similarity-based queries. We also present the main guidelines to implement the presented operators as integral components of a database system query engine, and several key performance evaluation results of this implementation in an open source database system.

We introduce a comprehensive conceptual evaluation model for similarity queries with multiple similarity-aware predicates, i.e., Similarity Selection, Similarity Join, and Similarity Group-by. This model clearly defines the expected correct result of a query with multiple similarity-aware predicates. Furthermore, we present multiple transformation rules to transform the initial evaluation plan into more efficient equivalent plans.
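To make the Similarity Group-by idea concrete, here is a drastically simplified, segmentation-style sketch in C. This is not the thesis implementation and handles only one-dimensional, pre-sorted data: consecutive values stay in the same group while the gap between neighbors is within a threshold `eps`.

```c
#include <stddef.h>

/* Illustrative sketch only, not the thesis implementation: a
 * segmentation-style Similarity Group-by over a sorted array.
 * A new group starts whenever the gap to the previous value
 * exceeds eps. Writes one group id per element and returns the
 * number of groups formed. */
int similarity_group_by(const double *v, size_t n, double eps, int *group)
{
    if (n == 0)
        return 0;

    int g = 0;
    group[0] = 0;
    for (size_t i = 1; i < n; i++) {
        if (v[i] - v[i - 1] > eps)
            g++;            /* gap too large: open a new group */
        group[i] = g;
    }
    return g + 1;
}
```

A real similarity-aware operator would integrate with the query engine and support multi-dimensional distance functions; this only shows the grouping semantics.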

          Bee-Eye (Bing Image Viewer)   
An open source WPF app that shows the result of Bing searches, and demos multi-touch and Windows 7 jumplist.
          GMap.NET - Great Maps for Windows Forms and Presentation   
An open source control for displaying maps and related information from various providers like Google, Bing, etc.
          The Ordinary Differential Equations Project   
An open source textbook designed to teach ordinary differential equations to undergraduates. The book's strengths include a wide range of exercises, both computational and theoretical, plus many nontrivial applications.
          The State of PLM Keynote at CONTACT Software Open World   

Earlier today, I was honored to be a keynote speaker at the CONTACT Software Open World 2017 user event in Fulda, Germany. The manufacturing industry is going through a transformation driven by many industry and technology trends – cloud, IoT, agile, open source, AI and machine learning. In...

The post The State of PLM Keynote at CONTACT Software Open World appeared first on Beyond PLM (Product Lifecycle Management) Blog.

          Eclipse gets ready for Java 9 with Oxygen release train   

The Eclipse Foundation’s annual release train, featuring simultaneous updates to dozens of projects, has just arrived, featuring preliminary Java 9 support. Called Oxygen, the release train covers 83 projects and includes 71 million lines of code.

Here are the key updates in Oxygen

  • Java 9 support remains in beta stage, because Java 9 itself will not be made available until Java Development Kit 9 ships on September 21. Oxygen’s Java 9 support includes the ability to add the Java Runtime Environment for Java 9 as the installed JRE as well as backing for the Java 9 execution environment. Developers also can create Java and plug-in projects using Java 9 and compile modules that are part of a Java project. Eclipse’s signature Java IDE has been enhanced as well, with improvements to the UI.
  • Eclipse Linux Tools 6.0 updates Docker Tools with more security options. This project provides a C/C++ IDE for Linux developers.
  • Eclipse PDT (PHP Development Tools) 5.0 supports the 7.1 version of PHP, which offers nullable types and a void return type.
  • The Eclipse Sirius 5.0 platform for building domain-specific modeling tools, with usability enhancements.
  • Eclipse EGit 4.8.0, offering performance and usability for the Java implementation of Git code management integration for Eclipse.

Focused on open source tools, Eclipse has offered annual release trains every June since 2006, letting developers coordinate upgrades or new releases of multiple projects. Last year's release train, Neon, offered tools for Docker and JavaScript. June 2018's release is slated to be called Photon.

To read this article in full or to leave a comment, please click here

          Sales Specialist - Cloud Management   
VIC-Melbourne CBD, At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortu
          Software: PhockUp, Terminus, Weblate, PiCluster, FreeDOS, LibreOffice, Jio Cinema, and (GNU) GRUB   
  • PhockUp is a Clever CLI Tool To Organize Photos by Date

    Phockup is a simple, straightforward, command line tool for sorting photos into folders based on date. It's an ideal tool for making organized backups.

  • Terminus is modern, highly configurable terminal app for Windows, Mac and Linux

    Hands up if you use GNOME Terminal as your default terminal on Ubuntu? That’s a lot of hands. GNOME Terminal is great. It’s fast, featured, and straightforward. But it doesn’t hurt to try a few alternatives to it from time to time. Be it the vintage chic of retro term or the modern minimalism of Hyper.

  • Weblate 2.15

    Weblate 2.15 has been released today. It is slightly behind schedule, which was mostly caused by my vacation. As with 2.14, there are quite a lot of security improvements based on reports we got from the HackerOne program, and various new features.

  • [Old] Why Use Package Managers?


    Fortunately, the vast majority of all open source software installs can be made trivial for anyone to do for themselves.  Modern package managers perform all the same steps as a caveman install, but automatically.  Package managers also install dependencies for us automatically.


    The pkgsrc package manager is unique in that it fully supports most POSIX compatible (Unix-like) operating systems.

  • What’s new in PiCluster 1.9

    PiCluster is a great platform to manage and orchestrate Docker containers. Although it started as a way to manage my Raspberry Pis, it can be run on any operating system that supports Node.js and Docker. PiCluster has been under heavy development lately and I'd like to share what is new in v1.9.

  • 4 cool facts you should know about FreeDOS

    In the early 1990s, I was a DOS "power user." I used DOS for everything and even wrote my own tools to extend the DOS command line. Sure, we had Microsoft Windows, but if you remember what computing looked like at the time, Windows 3.1 was not that great. I preferred working in DOS.

  • LibreOffice Mascot competition
  • Jio Cinema app now runs on Samsung Tizen TV

    Over the years, Samsung Electronics has unveiled a lot of Tizen-powered devices, many of which have received positive reviews. Two years ago, Samsung decided to start shipping Tizen on all of its upcoming Smart TVs as part of a bid to boost Tizen TV ecosystem. Since then, we have seen the likes of the SUHD TV line which was unveiled at CES 2016, Las Vegas, an event in which Samsung released a total of 49 TVs at the same time. Now, to further boost the popularity of Samsung-Tizen TV, Jio Cinema has been added to its Tizen TVs.

  • d2k17 hackathon report: Martin Pieuchot on moving the network stack out of the big lock

    I came to unlock the forwarding path and thanks to the multiple reviews from bluhm@, sashan@ and claudio@ it happened! It started as a boring hackathon because I had to review and fix all the abuses of splnet() in pseudo drivers but then it went very smoothly. I still haven't seen a bug report about the unlock and Hrvoje Popovski even reported a 20% forwarding performance increase.

  • GRUB Now Supports EXT4 File-Systems With Encryption

    The GRUB bootloader now supports file-systems making use of EXT4 file-system encryption but where the boot files are left unencrypted.
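The "Why Use Package Managers?" item above notes that package managers install dependencies automatically. Underneath, that is a dependency-first graph walk; the sketch below uses an invented four-package graph (the names and edges are made up for the example, real package managers read this from package metadata).

```c
/* Minimal sketch of automatic dependency resolution: install each
 * package's dependencies before the package itself (a depth-first
 * topological order). The package graph is invented for the example. */
enum { NPKG = 4 };

/* deps[i] lists the packages i depends on, terminated by -1. */
static const int deps[NPKG][NPKG] = {
    {1, 2, -1, -1},  /* 0: app    -> libfoo, libbar */
    {3, -1, -1, -1}, /* 1: libfoo -> zlib */
    {3, -1, -1, -1}, /* 2: libbar -> zlib */
    {-1, -1, -1, -1} /* 3: zlib   -> (nothing) */
};

static int installed[NPKG];

/* Appends package indices to 'order' in install order, skipping
 * anything already installed, exactly as a package manager would. */
static void install(int pkg, int *order, int *count)
{
    if (installed[pkg])
        return;
    for (int j = 0; j < NPKG && deps[pkg][j] != -1; j++)
        install(deps[pkg][j], order, count); /* dependencies first */
    installed[pkg] = 1;
    order[(*count)++] = pkg;
}
```

Requesting the top-level package pulls in the whole dependency closure in a valid order, with shared dependencies installed only once.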

          The Octinct Package   
Missing until now have been pieces of the Octinct puzzle: my control shield, along with Jonathan's graciously allowing the button pads to be open sourced under an Arduinome/Monome-style license.

Without further ado, my control shield:
Other Octinct files by Guberman:
Other Octinct files by Devon Jones:
  • Octinct Python Router (much faster and more efficient; it also has a much improved and commented version of the firmware, however I have not tested the hardware with it myself.)
I should add,

I'm using the older firmware and the old processing router in my examples.

The updated firmware and python router are much much better, the newer firmware has wonderful commenting. The firmware in the 'old' folder on evilsoft's site, and the 'old' firmware I'm using differ in a few ways:
  1. The port manipulation for the transistor bases has had some ports changed due to a mistake when creating the shield; I was assuming the analog ports were numbered incrementally and not decrementally, so I modified the firmware to account for this.
  2. The led remap has been changed
If you try migrating to the newer firmware, which you should, you will most likely have to repeat these changes.

The Octinct is free for non-commercial purposes, with the understanding that you will have the PCBs made domestically, wherever that may be, in the spirit of
          How to Install Screenshot Sharing App ‘ScreenCloud’ in Ubuntu   
ScreenCloud is an open-source screenshot sharing application consisting of a cross-platform client and a sharing website. With plugins, the app also supports uploading to other online services, e.g., an FTP server, Imgur, and Dropbox. ScreenCloud Features: Open source and cross-platform (Windows, Mac & Linux) Fast and easy: Snap a photo, paste the link, done! Plugin support, save […]
          RIP Chrome apps    
Update: Well, that was prescient.

At least once a day, I log into the Chrome Web Store dashboard to check on support requests and see how many users I've still got. Caret has held steady for the last year or so at about 150,000 active users, give or take ten thousand, and the support and feature requests have settled into a predictable rut:

  • People who can't run Caret because their version of Chrome is too old, and I've started using new ES6 features that aren't supported six browser versions back.
  • People who want split-screen support, and are out of luck barring a major rewrite.
  • People who don't like the built-in search/replace functionality, which makes sense, because it's honestly pretty terrible.
  • People who don't like the icons, and are just going to have to get over it.

In a few cases, however, users have more interesting questions about the fundamental capabilities of developer tooling, like file system monitoring or plugging into the OS in a deeper way. And there I have bad news, because as far as I can tell, Chrome apps are no longer actively developed by the Chromium team at all, and probably never will be again.

I don't think Chrome apps are going away immediately — they're still useful and used by a lot of third-party companies — but it's pretty clear from the dev side of things that Google's heart isn't in it anymore. New APIs have ceased to roll out, and apps don't get much play at conferences. The new party line is all about progressive web apps, with browser extensions for the few cases where you need more capabilities.

Now, progressive web apps are great, and anything that moves offline applications away from a single browser and out to the wider web is a good thing. But the fact remains that while a large number of Chrome apps can become PWAs with little fuss, Caret can't. Because it interacts with the filesystem so heavily, in a way that assumes a broader ecosystem of file-based tools (like Git or Node), there's actually no path forward for it using browser-only APIs. As such, it's an interesting litmus test for just how far web apps can actually reach — not, as some people have wrongly assumed, because there's an inherent performance penalty on the web, but because of fundamental limits in the security model of the browser.

Bounding boxes

What's considered "possible" for a web app in, say, 2020? It may be easier to talk about what isn't possible, which avoids the judgment call on what is "suitable." For example, it's a safe bet that the following capabilities won't ever be added to the web, even though they've been hotly debated in and out of standards committees for years:

  • Read/write file access (died when the W3C pulled the plug on the Directories part of the Filesystem API)
  • Non-HTTP sockets and networking (an endless number of reasons, but mostly "routers are awful")

There are also a bunch of APIs that are in experimental stages, but which I seriously doubt will see stable deployment in multiple browsers, such as:

  • Web Bluetooth (enormous security and usability issues)
  • Web USB (same as Bluetooth, but with added attacks from the physical connection)
  • Battery status (privacy concerns)
  • Web MIDI

It's tough to get worked up about a lot of the initiatives in the second list, which mostly read as a bad case of mobile envy. There are good reasons not to let a web page have drive-by access to hardware, and who's hooking up a MIDI keyboard to a browser anyway? The physical web is a better answer to most of these problems.

When you look at both lists together, one thing is clear: Chrome apps have clearly been a testing ground for web features. Almost all the not-to-be-implemented web APIs have counterparts in Chrome apps. And in the end, the web did learn from it — mainly that even in a sandboxed, locked-down, centrally distributed environment, giving developers that much power with so little install friction could be really dangerous. Rogue extensions and apps are a serious problem for Chrome, as I can attest: about once a week, shady people e-mail me to ask if they can purchase Caret. They don't explicitly say that they're going to use it to distribute malware and takeover ads, but the subtext is pretty clear.

The great thing about the web is that it can run code without any installation step, but that's also the worst thing about it. Even as a huge fan of the platform, the idea that any of the uncountable pages I visit in any given week could access USB directly is pretty chilling, especially when combined with exploits for devices that are plugged in, like hacking a phone (a nice twist on the drive-by jailbreak of iOS 4). Access to the file system opens up an even bigger can of worms.

Basically, all the things that we want as developers are probably too dangerous to hand out to the web. I wish that weren't true, but it is.

Untrusted computing

Let's assume that all of the above is true, and the web can't safely expand for developer tools. You can still build powerful apps in a browser, they just have to be supported by a server. For example, you can use a service like Cloud 9 (now an AWS subsidiary) to work on a hosted VM. This is the revival of the thick-client model: offline capabilities in a pinch, but ultimately you're still going to need an internet connection to get work done.

In this vision, we are leaning more on the browser sandbox: creating a two-tier system with the web as a client runtime, and a native tier for more trust on the local machine. But is that true? Can the web be made safe? Is it safe now? The answer is, at best, "it depends." Every third-party embed or script exposes your users to risk — if you use an ad network, you don't have any real idea who could be reading their auth cookies or tracking their movements. The miracle of the web isn't that it is safe, it's that it manages to be useful despite how rampantly unsafe its defaults are.

So along with the shift back to thick clients has come a change in the browser vendors' attitude toward powerful API features. For example, you can no longer use geolocation or the camera/microphone in Chrome on pages that aren't served over HTTPS, with other browsers to follow. Safari already disallows third-party cookie access as a general rule. New APIs, like Service Worker, require HTTPS. And I don't think it's hard to imagine a world where an API also requires a strict Content Security Policy that bans third-party embeds altogether (another place where Chrome apps led the way).

The packaged app security model was that if you put these safeguards into place and verified the package contents, you could trust the code to access additional capabilities. But trusting the client was a mistake when people were writing Quakebots, and it stayed a mistake in the browser. In the new model, those controls are the minimum just to keep what you had. Anything extra that lives solely on the client is going to face a serious uphill battle.

Mind the gap

The longer that I work on Caret, the less I'm upset by the idea that its days are numbered. Working on a moderately-successful open source project is exhausting: people have no problems making demands, sending in random changes, or asking the same questions over and over again. It's like having a second boss, but one that doesn't pay me or offer me any opportunities for advancement. It's good for exposure, but people die from exposure.

The one regret that I will have is the loss of Caret's educational value. Since its early days, there's been a small but steady stream of e-mail from teachers who are using it in classrooms, both because Chromebooks are huge in education and because Caret provides a pretty good editor with almost no fuss (you don't even have to be signed in). If you're a student, or poor, or a poor student, it's a pretty good starter option, with no real competition for its market niche.

There are alternatives, but they tend to be online-only (like Mozilla's Thimble) or they're not Chromebook friendly (Atom) or they're completely unacceptable in a just world (Vim). And for that reason alone, I hope Chrome keeps packaged apps around, even if they refuse to spend any time improving the infrastructure. Google's not great at end-of-life maintenance, but there are a lot of people counting on this weird little ecosystem they've enabled. It would be a shame to let that die.

          Daniel Pocock: A FOSScamp by the beach   

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

          Mozilla Open Innovation Team: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities   
Photo credit: cyberdee via Visual Hunt / CC BY-NC-SA

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences — when the audience remains homogeneous, and abuse goes unchallenged.

Open source is missing out on diverse perspectives, and experiences that can drive change for a better world, because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews as part of either identity or focus groups, including interviews in the first language of participants; for areas of low bandwidth (or those who preferred not to speak on video) we interviewed in Telegram.

Qualitative data was analyzed from various sources including Mozilla Reps portal, Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into next. As part of this, we intend to build bridges between our work and other open source communities research and work. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.


A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

          Emma Irwin: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities   

Cross-posted to our Open Innovation Blog

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences — when the audience remains homogeneous, and abuse goes unchallenged.

Open source is missing out on diverse perspectives, and experiences that can drive change for a better world, because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews as part of either identity or focus groups, including interviews in the first language of participants; for areas of low bandwidth (or those who preferred not to speak on video) we interviewed in Telegram.


Qualitative data was analyzed from various sources including Mozilla Reps portal, Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into next. As part of this, we intend to build bridges between our work and other open source communities research and work. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.

Cross-posted to our Open Innovation Blog

Feature Image  Photo credit: cyberdee via Visual Hunt / CC BY-NC-SA


          Hacks.Mozilla.Org: Introducing HumbleNet: a cross-platform networking library that works in the browser   

HumbleNet started out as a project at Humble Bundle in 2015 to support an initiative to port peer-to-peer multiplayer games, at first to asm.js and now to WebAssembly. In 2016, Mozilla’s web games program identified the need to enable UDP (User Datagram Protocol) networking support for web games, and asked if they could work with Humble Bundle to release the project as open source. Humble Bundle graciously agreed, and Mozilla worked with them to polish and document HumbleNet. Today we are releasing the 1.0 version of this library to the world!

Why another networking library?

When the idea of HumbleNet first emerged we knew we could use WebSockets to enable multiplayer gaming on the web. This approach would require us to either replace the entire protocol with WebSockets (the approach taken by the asm.js port of Quake 3), or to tunnel UDP traffic through a WebSocket connection to talk to a UDP-based server at a central location.

In order to work, both approaches require a middleman to handle all network traffic between all clients. WebSockets is good for games that require a reliable ordered communication channel, but real-time games require a lower latency solution. And most real-time games care more about receiving the most recent data than getting ALL of the data in order. WebRTC’s UDP-based data channel fills this need perfectly. HumbleNet provides an easy-to-use API wrapper around WebRTC that enables real-time UDP connections between clients using the WebRTC data channel.
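The "most recent data wins" behavior described above is typically implemented with a sequence number per packet. A small sketch (not part of the HumbleNet API, just an illustration of the idea) that applies a state packet only if it is newer than the last one used:

```c
#include <stdint.h>

/* Tracks the newest state snapshot applied so far. A packet is used
 * only if its sequence number is newer; late or out-of-order packets
 * are simply dropped. Casting the unsigned difference to a signed
 * value keeps the comparison correct across uint32_t wrap-around. */
static uint32_t last_applied_seq;

int should_apply(uint32_t seq)
{
    if ((int32_t)(seq - last_applied_seq) > 0) {
        last_applied_seq = seq;
        return 1; /* newer than anything seen: apply it */
    }
    return 0;     /* stale or duplicate: drop it */
}
```

This is exactly the property a reliable ordered channel like WebSockets cannot give you cheaply: with an unordered UDP-style channel, a late packet costs one dropped update instead of stalling everything behind it.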

What exactly is HumbleNet?

HumbleNet is a simple C API that wraps WebRTC and WebSockets and hides away all the platform differences between browser and non-browser platforms. The current version of the library exposes a simple peer-to-peer API that allows for basic peer discovery and the ability to easily send data (via WebRTC) to other peers. In this manner, you can build a game that runs on Linux, macOS, and Windows, while using any web browser — and they can all communicate in real-time via WebRTC.  This means no central server (except for peer discovery) is needed to handle network traffic for the game. The peers can talk directly to each other.

HumbleNet itself uses a single WebSocket connection to manage peer discovery. This connection only handles requests such as "let me authenticate with you", "what is the peer ID for the server named bobs-game-server", and "connect me to peer #2345". After the peer connection is established, the games communicate directly over WebRTC.

HumbleNet demos

We have integrated HumbleNet into asm.js ports of Quake 2 and Quake 3, and we provide a simple Unity3D demo as well.

Here is a simple video of me playing Quake 3 against myself. One game running in Firefox 54 (general release), the other in Firefox Developer Edition.

Getting started

You can find pre-built redistributables at These include binaries for Linux, macOS, Windows, a C# wrapper, Unity3D plugin, and emscripten (for targeting asm.js or WebAssembly).

Starting your peer server

Read the documentation about the peer server on the website. In general, for local development, simply starting the peer server is good enough. By default it will run in non-SSL mode on port 8080.

Using the HumbleNet API

Initializing the library

To initialize HumbleNet just call humblenet_init() and then later humblenet_p2p_init(). The second call will initiate the connection to the peer server with the specified credentials.


// this initializes the P2P portion of the library connecting to the given peer server with the game token/secret (used by the peer server to validate the client).
// the 4th parameter is for future use to authenticate the user with the peer server

humblenet_p2p_init("ws://localhost:8080/ws", "game token", "game secret", NULL);
Getting your local peer id

Before you can send any data to other peers, you need to know what your own peer ID is. This can be done by periodically polling the humblenet_p2p_get_my_peer_id() function.

// initialization loop (getting a peer)
static PeerId myPeer = 0;

while (myPeer == 0) {
  // allow the polling to run
  humblenet_p2p_wait(100);

  // fetch a peer
  myPeer = humblenet_p2p_get_my_peer_id();
}
Sending data

To send data, we call humblenet_p2p_sendto. The 3rd parameter is the send mode. Currently HumbleNet implements 2 modes: SEND_RELIABLE and SEND_RELIABLE_BUFFERED. The buffered version will attempt to buffer several small messages locally and send them as one larger message to the other peer; they are broken apart transparently on the receiving end.

void send_message(PeerId peer, MessageType type, const char* text, int size)
{
  if (size > 255) {
    size = 255;  // the length byte below can only hold 0-255
  }

  uint8_t buff[MAX_MESSAGE_SIZE];

  buff[0] = (uint8_t)type;
  buff[1] = (uint8_t)size;

  if (size > 0) {
    memcpy(buff + 2, text, size);
  }

  humblenet_p2p_sendto(buff, size + 2, peer, SEND_RELIABLE, CHANNEL);
}
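For clarity, here is a self-contained sketch of the little [type][size][payload] frame that send_message builds, together with the matching unpack step a receiver would perform. The helper names here are mine for illustration, not part of the HumbleNet API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_MESSAGE_SIZE 257  /* 1 type byte + 1 length byte + up to 255 payload bytes */

/* Pack [type][size][payload] into out; returns total bytes written. */
static int pack_frame(uint8_t type, const char *text, uint8_t size, uint8_t *out) {
  out[0] = type;
  out[1] = size;
  if (size > 0) {
    memcpy(out + 2, text, size);
  }
  return size + 2;
}

/* Unpack a frame; returns payload length, or -1 if the buffer is too short.
 * payload must have room for up to 256 bytes (255 payload + NUL). */
static int unpack_frame(const uint8_t *in, int len, uint8_t *type, char *payload) {
  if (len < 2 || len < 2 + in[1]) {
    return -1;
  }
  *type = in[0];
  memcpy(payload, in + 2, in[1]);
  payload[in[1]] = '\0';
  return in[1];
}
```

On the wire, the receiver gets exactly the bytes produced by pack_frame, so the two functions round-trip a message regardless of which transport delivered it.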
Initial connections to peers

When connecting to a peer for the first time, you will have to send an initial message several times while the connection is established. The basic approach is to send a hello message once a second and wait for an acknowledge response before assuming the peer is connected. Thus, minimally, any application will need 3 message types: HELLO, ACK, and some kind of DATA message type.

if (newPeer.status == PeerStatus::CONNECTING) {
  time_t now = time(NULL);

  if (now > newPeer.lastHello) {
    // try once a second
    send_message(newPeer.id, MessageType::HELLO, "", 0);  // id: the remote peer's ID in your own bookkeeping
    newPeer.lastHello = now;
  }
}
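The retry bookkeeping above can be sketched entirely separately from the networking calls; a minimal self-contained version (the type and field names here are illustrative, not HumbleNet's):

```c
#include <assert.h>
#include <time.h>

/* Illustrative per-peer bookkeeping for the HELLO/ACK handshake. */
typedef enum { CONNECTING, CONNECTED } PeerStatus;

typedef struct {
  PeerStatus status;
  time_t lastHello;   /* when we last sent a HELLO */
  int hellosSent;
} PeerState;

/* Returns 1 if a HELLO should be (re)sent now, updating the bookkeeping. */
static int should_send_hello(PeerState *p, time_t now) {
  if (p->status != CONNECTING) {
    return 0;
  }
  if (now > p->lastHello) {  /* at most once a second */
    p->lastHello = now;
    p->hellosSent++;
    return 1;
  }
  return 0;
}

/* Call when an ACK arrives from the peer. */
static void on_ack(PeerState *p) {
  p->status = CONNECTED;
}
```

Once on_ack fires, the sender stops retrying and the peer can be treated as connected.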
Retrieving data

To actually retrieve data that has been sent to your peer, you use humblenet_p2p_peek and humblenet_p2p_recvfrom. If you assume that all messages are smaller than a maximum size, then a simple loop like this can process any pending messages. Note: messages larger than your buffer will be truncated. Using humblenet_p2p_peek you can see the size of the next message for the specified channel.

uint8_t buff[MAX_MESSAGE_SIZE];
bool done = false;

while (!done) {
  PeerId remotePeer = 0;

  int ret = humblenet_p2p_recvfrom(buff, sizeof(buff), &remotePeer, CHANNEL);

  if (ret < 0) {
    if (remotePeer != 0) {
      // disconnected client
    } else {
      // error
      done = true;
    }
  } else if (ret > 0) {
    // we received data; process it
    process_message(remotePeer, buff, sizeof(buff), ret);
  } else {
    // 0 return value means no more data to read
    done = true;
  }
}
Shutting down the library

To disconnect from the peer server and other clients, and to shut down the library, simply call humblenet_shutdown().

Finding other peers

HumbleNet currently provides a simple, DNS-like method of locating other peers: you register a name on one client, and then create a virtual peer for that name on the other clients. Take the client-server style approach of Quake 3, for example, and have your server register its name as “awesome42”.


Then, on your other peers, create a virtual peer for awesome42.

PeerID serverPeer = humblenet_p2p_virtual_peer_for_alias("awesome42");

Now the client can send data to serverPeer and HumbleNet will take care of translating the virtual peer to the actual peer once it resolves the name.

We have two systems on the roadmap that will improve the peer discovery system.  One is an event system that allows you to request a peer to be resolved, and then notifies you when it’s resolved. The second is a proper lobby system that allows you to create, search, and join lobbies as a more generic means of finding open games without needing to know any name up front.

Development Roadmap

We have a roadmap of what we plan on adding now that the project is released. Keep an eye on the HumbleNet site for the latest development.

Future work items include:

  1. Event API
    1. Allows a simple SDL2-style polling event system so that game code can easily check for various events from the peer server in a cleaner way, such as connects, disconnects, etc.
  2. Lobby API
    1. Uses the Event API to build a means of creating lobbies on the peer server in order to locate game sessions (instead of having to register aliases).
  3. WebSocket API
    1. Adds support for easily connecting to any WebSocket server with a clean, simple API.

How can I contribute?

If you want to help out and contribute to the project, HumbleNet is being developed on GitHub; use the issue tracker and pull requests to contribute code. Be sure to read the guide on how to create a pull request.

          [tech] A geek Dad goes to Kindergarten with a box full of Open Source and some vegetables   

Zoe's Kindergarten encourages parents to come in and spend some time with the kids. I've heard reports of other parents coming in and doing baking with the kids or other activities at various times throughout the year.

Zoe and I had both wanted me to come in for something, but it had taken me until the last few weeks of the year to get my act together and do something.

I'd thought about coming in and doing some baking, but that seemed rather done to death already, and it's not like baking is really my thing, so I thought I'd do something technological. I just wracked my brains for something low effort and Kindergarten-age friendly.

The Kindergarten has a couple of eduss touch screens. They're just some sort of large screen with a bunch of inputs and outputs. I think the Kindergarten mostly uses them for showing DVDs, hooking up a laptop, and possibly doing something interactive.

As they had HDMI input, and my Raspberry Pi had HDMI output, it seemed like a no-brainer to do something using the Raspberry Pi. I also thought hooking up the MaKey MaKey to it would make for a more fun experience. I just needed to actually have it all do something, and that's where I hit a bit of a creative brick wall.

I thought I'd just hack something together where based on different inputs on the MaKey MaKey, a picture would get displayed and a sound played. Nothing fancy at all. I really struggled to get a picture displayed full screen in a time efficient manner. My Pi was running Raspbian, so it was relatively simple to configure LightDM to auto-login and auto-start something. I used triggerhappy to invoke a shell script, which took care of playing a sound and an image.
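For reference, a triggerhappy setup like the one described usually boils down to a trigger file that maps input events to a handler script. This is a sketch with assumed paths and event codes; the script name and file locations are illustrative, not from the original post:

```
# /etc/triggerhappy/triggers.d/makey.conf
# format: <event name> <value (1 = key press)> <command>
KEY_LEFT   1   /home/pi/bin/play.sh left
KEY_RIGHT  1   /home/pi/bin/play.sh right
KEY_SPACE  1   /home/pi/bin/play.sh fire
```

The script then plays the sound and shows the image for whichever input fired; note that each key-down generates an event, so long holds produce a flood of repeats.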

Playing a sound was easy. Displaying an image less so, especially if I wanted the image loaded fast. I really wanted to avoid having to execute an image viewer every time an input fired, because that would be just way too slow. I thought I'd found a suitable application in Geeqie, because it supported being managed out of band, but its problem was that it also responded to the inputs from the MaKey MaKey, so it became impossible to predictably display the right image with the right input.

So the night before I was supposed to go to Kindergarten, I was up beating my head against it, and decided to scrap it and go back to the drawing board. I was looking around for a Kindergarten-friendly game that used just the arrow keys, and I remembered the trusty old Frozen Bubble.

This ended up being absolutely perfect. It had enough flags to control automatic startup, so I could kick it straight into a dumbed-down full screen 1 player game (--fullscreen --solo --no-time-limit)

The kids absolutely loved it. They were cycled through in groups of four and all took turns having a little play. I brought a couple of heads of broccoli, a zucchini and a potato with me. I started out using the two broccoli as left and right and the zucchini to fire, but as it turns out, not all the kids were as good with the "left" and "right" as Zoe, so I swapped one of the broccoli for a potato and that made things a bit less ambiguous.

The responses from the kids were varied. Quite a few clearly had their minds blown and wanted to know how the broccoli was controlling something on the screen. Not all of them got the hang of the game play, but a lot did. Some picked it up after having a play and then watching other kids play and then came back for a more successful second attempt. Some weren't even sure what a zucchini was.

Overall, it was a very successful activity, and I'm glad I switched to Frozen Bubble, because what I'd originally had wouldn't have held up to the way the kids were using it. There was a lot of long holding/touching of the vegetables, which would have fired hundreds of repeat events, and just totally overwhelmed triggerhappy. Quite a few kids wanted to pick up and hold the vegetables instead of just touch them to send an event. As it was, the Pi struggled to play Frozen Bubble enough as it was.

The other lesson I learned pretty quickly was that an aluminium BBQ tray worked a lot better as the grounding point for the MaKey MaKey than having to tether an anti-static strap around each kid's ankle as they sat down in front of the screen. Once I switched to the tray, I could rotate kids through the activity much faster.

I just wish I was a bit more creative, or there were more Kindergarten-friendly arrow-key driven Linux applications out there, but I was happy with what I managed to hack together with a fairly minimal amount of effort.

          Comment on System76 has announced its own distribution: Pop!_OS – tried out myself, by Al CiD   
Have a look here; they responded to this after Ikey (Solus) also briefly pointed it out. Note in particular the reply from Ryan Sipes (System76): +Ikey Doherty That's good feedback. We are going to throw up a roadmap soon that will cover more things we plan on tackling. I've shared this with engineering. We waited 3 years on Ubuntu to bring features we were waiting on to the desktop (Unity 8, anyone?). We have ideas for where we are going and want to focus on that. All our work will be open source and can be adopted upstream. But, you are right, we should be doing this stuff in the best way possible and not reinventing the wheel.
          Comment on Oracle open sources new tools focused on containers by Oracle open sources new tools focused on containers - Khabri No 1   
[…] post Oracle open sources new tools focused on containers appeared first on SD […]
          Eclipse gets ready for Java 9 with Oxygen release train   

The Eclipse Foundation’s annual release train, featuring simultaneous updates to dozens of projects, has just arrived, featuring preliminary Java 9 support. Called Oxygen, the release train covers 83 projects and includes 71 million lines of code.

Here are the key updates in Oxygen:

  • Java 9 support remains in beta stage, because Java 9 itself will not be made available until Java Development Kit 9 ships on September 21. Oxygen’s Java 9 support includes the ability to add the Java Runtime Environment for Java 9 as the installed JRE as well as backing for the Java 9 execution environment. Developers also can create Java and plug-in projects using Java 9 and compile modules that are part of a Java project. Eclipse’s signature Java IDE has been enhanced as well, with improvements to the UI.
  • Eclipse Linux Tools 6.0 updates Docker Tools with more security options. This project provides a C/C++ IDE for Linux developers.
  • Eclipse PDT (PHP Development Tools) 5.0 supports the 7.1 version of PHP, which offers nullable types and a void return type.
  • Eclipse Sirius 5.0, a platform for building domain-specific modeling tools, arrives with usability enhancements.
  • Eclipse EGit 4.8.0, offering performance and usability improvements for the Java implementation of Git code management integration for Eclipse.

Focused on open source tools, Eclipse has offered annual release trains every June since 2006, letting developers coordinate upgrades or new releases of multiple projects. Last year’s release train, Neon, offered tools for Docker and JavaScript. June 2018’s release is slated to be called Photon.

To read this article in full or to leave a comment, please click here

IDC: Server shipments slow on spread of virtualization:  "Growth in the x86 server market revved slightly in Q4 2006, growing 7.0% in the quarter to $7.2 billion worldwide, its fastest growth rate in five quarters, but unit shipment growth continued to moderate with growth at 1.1% year over year, to 1.85 million servers as customers continued to consolidate their IT infrastructures, .. "For the first time in more than 10 years, average selling values in the quarter increased year over year as IT managers move to consolidate IT workloads. This shift toward a shared compute infrastructure is driving additional scalability, memory attachment and I/O needs, which in turn, lead to higher average selling values." ..

Microsoft Windows servers .. revenue grew 9.4% and unit shipments grew 5.1% year over year. Quarterly revenue of $5.3 billion for Windows servers represented 34.9% of overall quarterly factory revenue, the single largest revenue segment in the server market, IDC reported.

After two consecutive quarters of single-digit revenue growth, Linux server revenue growth accelerated once again, growing 15.3% to $1.8 billion when compared with Q4 2005. Linux servers now represent 11.9% of all server revenue, up more than one point over Q4 2005. But Linux server shipments declined 0.8% year over year after 18 quarters of double-digit shipment growth, as IT consolidation extends its reach into the open source domain...

Unix servers experienced 2.8% revenue growth year over year when compared with Q4 2005. Worldwide Unix revenues were $5.1 billion for the quarter, representing 33.5% of quarterly server spending."  Itanium, z/OS and blades sold about $3.5B combined."

How To Tell The Open Source Winners From The Losers: A 9-point checklist for evaluating open source solutions:
  1. "A thriving community: A handful of lead developers, a large body of contributors, and a substantial--or at least motivated--user group offering ideas.
  2. Disruptive goals: Does something notably better than commercial code. Free isn't enough.
  3. A benevolent dictator: Leader who can inspire and guide developers, asking the right questions and letting only the right code in.
  4. Transparency: Decisions are made openly, with threads of discussion, active mailing list, and negative and positive comments aired.
  5. Civility: Strong forums police against personal attacks or niggling issues, focus on big goals.
  6. Documentation: What good's a project that can't be implemented by those outside its development?
  7. Employed developers: The key developers need to work on it full time.
  8. A clear license: Some are very business friendly, others clear as mud.
  9. Commercial support: Companies need more than e-mail support from volunteers. Is there a solid company employing people you can call? "

Global Voices Online:  Interesting compilation of current blog material from citizens of many countries, including Lebanon, Libya, China, and Iran, with coverage of local news.  Would provide interesting inputs to the "open source intelligence" movement.

          Introducing Website Speed Test: An Image Analysis Tool   


This article was sponsored by Cloudinary. Thank you for supporting the partners who make SitePoint possible.

Because images dominate page weight, methodical image optimization can have a significant effect on conversions and user experience. The performance tools you choose to use can have a powerful impact on how websites are built and maintained. One such popular open source tool is WebPagetest. It is designed to measure and analyze webpage performance, which is why Cloudinary chose to partner with our friends there to launch Website Speed Test.

Website Speed Test is an image analysis tool that provides detailed optimization insights beyond a simple compression check. The tool leverages Cloudinary’s advanced algorithms to demonstrate how changes to image size, format, quality and encoding parameters can result in significant reductions in file size while maintaining perceived quality. In short, Website Speed Test shows the why and how of image optimization.

How Website Speed Test Works

Advanced algorithms take into account many factors when examining images, including the exact content of an image and the need for responsive design. The resulting insights enable you to ensure that images are encoded correctly, optimized for performance, and look their best regardless of bandwidth, viewing browser, device or viewport.

At the top of the page, the report shows the total weight of images, potential compression and ‘Page Image Score’: a grade ranging from A-F. This grade is based on the image format used, fit between image resolution and the displayed size in the graphic design, and compression rate of all the images that were analyzed.

Cloudinary Image Analysis Results

The overview is followed by a detailed analysis of each image, with performance insights and recommendations for improvement.

Left Tab – Current Image

Presents the current version of the image being analyzed along with its image score.

Middle Tab – Optimized Image

Presents an optimized version of the image, using the same format as the original image, with the following adjustments:

  • Correctly-sized images - scales the image down to the actual required dimensions on the web page
  • Intelligent content-aware encoding - analyzes the image to find the best quality compression level and optimal encoding settings, based on the content and viewing browser, producing a perceptually fine image while minimizing the file size.

Learn more about these manipulations

Right Tab - Format Alternatives

This tab shows how optimization works for different image formats and the impact on image weight.

Improved Image Analysis Using WebPagetest

Linked from a new Image Analysis tab, Cloudinary powers WebPagetest with robust image analysis capabilities, enabling you to receive valuable data and guidance on how to manage images and deliver an optimal user experience.

Optimizing Images is No Easy Task

The Website Speed Test tool provides insights on the why and how of optimization. While you may be able to optimize an image or two manually, the process becomes exponentially more complicated when you need to scale up, managing hundreds, thousands, or even millions of images delivered to a website.

For the best user experience, each image should be enhanced and optimized to meet the viewing context. This entails automatically adapting the image to fit the layout of the page and selecting the optimal quality and encoding settings.

Accomplishing this type of optimization is no ordinary feat. Optimizing images for different browsers, devices and bandwidth requires considerable knowledge of the intricacies of image formats, encoding parameters and visual quality metrics. For example, it makes sense that a smaller image file size will result in faster load time, less bandwidth usage and a better user experience. However, reduce the file size too much, and image quality could suffer and impair user satisfaction. This is where Cloudinary’s automatic optimization comes into play.

You can create your free account here.

Continue reading %Introducing Website Speed Test: An Image Analysis Tool%

          ZEN CAPITALISM   
Dear friend, What does Eric Kim believe in? ZEN CAPITALISM. Okay, to start off, I think I’m the next Steve Jobs. I have been able to integrate street photography, zen, consumerism, capitalism, marketing, advertising, entrepreneurship, open source, free, and socialism. Which made me realize: I’m a Zen Capitalist. I stand on the shoulders of giants. […]
          April 2017   
2017-04-29. Milan, Hotel Sheraton Malpensa. Appearance on Astrocaffè (video).

2017-04-29/30. Milan, Hotel Sheraton Malpensa. Interpreter for lunar astronaut Charlie Duke.

2017-04-28. Radiotelevisione Svizzera, Rete Tre. Episode no. 500 of Disinformatico.

2017-04-26. Radio Inblu. Segment La Rete in tre minuti.

2017-04-21. Rome, Montecitorio. Moderated the news media working group as part of the Bastabufale project of the Presidency of the Chamber of Deputies.

2017-04-21. Radiotelevisione Svizzera, Rete Tre. Episode no. 499 of Disinformatico.

2017-04-19. RSI La1. Appearance on Cuochi d'artificio (video).

2017-04-19. Radio Inblu. Segment La Rete in tre minuti.

2017-04-14. Radiotelevisione Svizzera, Rete Tre. Episode no. 498 of Disinformatico.

2017-04-12. Radio Inblu. Segment La Rete in tre minuti.

2017-04-11. Novara, auditorium of the Liceo Scienze Umane. Talk for parents (especially of children aged 2 to 10) on the use (and abuse) of tablets and smartphones. 

2017-04-11. Borgomanero, auditorium of the Liceo Scientifico. Two talks Amore a tre: io, te e lo smartphone (for students).

2017-04-10. Borgomanero, auditorium of the Liceo Scientifico. Talk Sessualità, sentimenti e smartphone (for adults).

2017-04-07. Chiavari. Evening for Rotary Club members on the conspiracy theories surrounding the Moon landings.

2017-04-07. Radiotelevisione Svizzera, Rete Tre. Episode no. 497 of Disinformatico.

2017-04-06. Vacallo. Talk for parents on the risks of Internet abuse by minors, together with the Cantonal Police. 

2017-04-05. Biasca, middle school. Lesson Open source: software aperto per riprendersi il computer as part of the Canton's adult education courses.

2017-04-05. Radio Inblu. Segment La Rete in tre minuti.

2017-04-04. Vacallo. Talk for parents on the risks of Internet abuse by minors, together with the Cantonal Police.

2017-04-04. Biasca. Lesson Gli inganni della mente for middle-school students.

2017-04-03. Lugano, Liceo 2. Participation in the Liceo's student self-managed days with the talk Nufologia: come evitare gli inganni dei falsi UFO.
          Java Software Engineers Needed - (Bedford)   
Java Software Engineers Needed
Location: Bedford, MA
No relocation assistance provided
US Citizens, GC and EAD encouraged to apply.

We currently have multiple fulltime/perm Java Software Engineer positions available for our direct client in Bedford, MA. As a Java Software Engineer you'll:
  • Design and develop key components of our client's enterprise software products
  • Engineer and implement new product features
  • Collaborate with Product Management, Quality Assurance, Customer Support and Implementation Services to develop and deliver high-quality product features in a timely manner
  • Review and provide input to feature functional designs and technical specifications
  • Deliver high-quality and efficient code
  • Develop and execute unit and integration tests
  • Perform code reviews and provide appropriate feedback
  • Resolve reported defects
  • Review test cases
  • Actively participate in all aspects of an agile environment
  • Contribute to continually evolving software excellence

Requirements include a Bachelor's degree or equivalent in Computer Science, Management Information Sciences or related field and three years of work experience in the job offered or related field of complex enterprise software development; or a Master's degree or equivalent and one year of pre- or post-degree work experience. 
Additional requirements include:
  • Strong background with Hibernate and Spring
  • Strong background with other contemporary Java technologies and with relational databases, including SQL Server
  • Thorough knowledge of unit and regression testing and code reviews
  • Strong background with Java/J2EE development, object-oriented analysis and design, service-oriented architectures using synchronous Web services and asynchronous messaging, Java application servers, and open source JUnit and Jasper Reporting technologies
  • Strong verbal and written communication skills
  • Healthcare experience is a strong plus

Applicants must have unrestricted authorization to work in the United States.
Keywords: Java/J2EE, SQL Server, Web Services, JUnit, Jasper
          Implement Opensource R Package in Matlab by luongjames   
There are two open source R packages that I use that I want to use natively within MATLAB. I need them translated and implemented in MATLAB. Blotter Package (Budget: $250 - $750 SGD, Jobs: Matlab and Mathematica)
          Contributing to OSS projects made easy   

I recently came across what I believe is a missing feature (bug?) in the most excellent Json.NET library: when using a custom constructor, default values are not populated for properties :(

Being open source, I just went to its GitHub project, created the mentioned Issue, and proceeded to fork the repo.

Immediately after getting the source and opening the VS solution, I noticed it used different settings than my defaults, i.e., two-space “tabs”. That’s typically the first barrier to contributing: am I supposed to go to the project website, read some coding-standards page, and change my VS settings to match, just to start coding?

Luckily, I also noticed alongside the solution, the Newtonsoft.Json.vssettings file. Now that’s useful! Not only that, but by being named the same as the solution file (just changing the file extension), it’s automatically picked up by the  ...

Read full article

          PayPal Samples are virtually useless   

I’m investigating the PayPal Adaptive Payments for a project, and was gladly surprised to find not only the sample app but also the SDK itself open sourced at GitHub. Cool!

My excitement ended abruptly as I opened the sample app. It’s so unbelievably bad you have to see it to believe it. A few WTFs:

  1. Sample is essentially a bunch of .aspx pages with plain forms that allow you to fill all possible parameters
  2. Form uses postback to a **1600+** lines of code .ashx handler that uses the submit **BUTTON TEXT** to determine the operation to perform
  3. Then the handler goes on to manually parse the HttpContext.Request.Params collection one by one and building the PayPal API classes used in the requests/services


Who builds web apps this way these days?? Being a C# sample, I would have expected at least an MVC app that potentially used nice model binders to automatically parse form input and invoke PayPal from that, filled an EF code-first database with the IPN callback data, etc....

Read full article

          A new way of financing open source software via Kickstarter   

The Kickstarter successes of the ambitious Ghost as well as the much more modest Schema Migrations for Django or git-annex (all open source software projects) got me thinking that this could be the start of a new way to fund open source projects in general.

The Django project in particular was solely to add a significant feature to an existing open source project. What if popular open source projects started creating Kickstarter projects to raise funds to push the projects forward? Say a major, significant rewrite is needed in some area, but nobody can afford a 2-week break from work to complete it? Maybe it requires more than one dev? Maybe devs are from different parts of the world, and if significant funds are raised, they could even afford to travel and work together on-site somewhere for 2-3 weeks?...

Read full article

          NuGet References: publishing my first open source extension to the DevStore   

Last week I had the pleasure of spending time with a bunch of friends at the OuterConf 2013, including pretty much the entire NuGet team. I also could attend to the hackathon they organized, and I got to hack what I think is a pretty cool Visual Studio 2012 extension: NuGet References.

An improved NuGet experience

The idea is simple enough: I wanted to leverage some new extensibility hooks in VS2012 to show installed nuget packages right inside a packages.config file, like so:


And once you have the nodes in there, wouldn’t it be cool to be able to update and uninstall right from there?


And why not allow me to see key information about the package on the properties window?


Cool enough.

Monetizing your creations

Now, after the extension was “done” (there’s a TON more that can be added to it! This is just the start), I realized I had put quite a few hours (more like days by now!) into it. So even if I do want my work to be open source so that eventually it can make it into NuGet’s core tooling, I could certainly use a few bucks to pay for the coffee and beer I put into it ...

Read full article

          TensorFlow: Google releases its artificial intelligence software to the Open Source world   

Google is releasing its machine learning platform to the Open Source world. It sits at the heart of the artificial intelligence that will soon let apps on smartphones perform functions that have been impossible until now.

Author: byoblu
Tags: Google Artificial intelligence Learning Machine Tensorflow Apprendimento automatico Intelligenza artificiale
Posted: 10 November 2015

          Principal SDE Lead - Microsoft - Redmond, WA   
Experience with open source platforms like node.js, Ruby on Rails, the JVM ecosystem, the Hadoop ecosystem, data platforms like Postgres, MongoDB and Cassandra...
From Microsoft - Thu, 29 Jun 2017 10:48:18 GMT - View all Redmond, WA jobs
WGBH at 7PM Eastern -- the inaugural airing of Chris Lydon's Open Source. 89.7 FM. Guests are David Weinberger, Doc Searls and yours truly. If you're not in the Boston area, please tune in to the webcast.
          MRS 009 My Ruby Story Brian Hogan   

My Ruby Story 009 Brian Hogan

On this episode we have another My Ruby Story, and there is a good chance you might recognize him: he is one of’s panelists, Brian Hogan. Aside from being a panelist on Ruby Rogues, he also has a couple of other projects, like as well as

How did you get into programming?

Brian talks about how his Dad had an old Apple 2 computer. His father was a teacher for the blind, and the computer had a box on it that would talk. His Dad taught him that computers can have programs written for them to make them do things. Brian talks about having math issues one evening, and his Dad helped by making a math program that would quiz him. His Dad wasn’t a programmer, but he had picked up some of it from being around it. Brian talks about how the library had games you could get for the Apple 2, but you had to type code into the computer to make them work. He started tweaking the code and learned that it adjusted things in the game, like the speed of the spaceship or the damage of the bomb.

Brian’s First Program

Brian’s first program was in fourth grade. He had an assignment on the topic of the seas and instead of doing a typical handwritten assignment he created a program for it. He learned that he could make the computer do things. Over time Brian got interested in other things, planning to go to school for law. His Dad lost his job making his plans for law school unreachable without student loan debt. He started making money on the side repairing and building computers.

Computers solving problems

He talks about how he never really got into the computer science level of things, but he was always excited about being able to solve people’s problems with computers. He remembers getting internet for the first time. It was Netscape and it came with a book on how to setup the internet and then in the last chapter it had a section teaching how to make a webpage with HTML. He loved making websites and so he made pages for businesses and made money on the side. He went to college aiming for computer science and then when he got into classes like computational theory, he found that it was boring to him still. He changed his major to business. He then got a job working for the college working with website stuff. The developer for the pages ended up quitting and so they asked Brian to help out. So he learned Microsoft server SQL and ASP. He adds that essentially he fell into web development by accident. He talks about his code being bad until he learned Ruby, crediting Ruby with making object oriented programming easier to understand. Charles mentions that he felt the same way in school, it wasn’t until he needed to fix a real problem that programming really started to seem useful and fun. Brian talks about how he isn’t really the best programmer, but his strengths are helping other people to program. He has trained many people to program since then.

Learning with Context

He talks about how in school they throw JavaScript at you and teach the higher-level concepts before you understand the basics. He points out that doing something like teaching Git on the first day doesn't make sense, because the students don't yet understand why they need it. He suggests that what is missing from the curriculum is the real-world connection: the majority of adults need to be able to connect what they are learning to something they have already learned. Context is important for learning.

How did you get into Ruby?

Brian talks about doing PHP for a while, as well as ASP. He was working on a project as an Oracle DBA; the project was moving from Java to an Oracle database, but no one there really knew Java, and a colleague named Bruce suggested that the work they were doing would be better written in Ruby. The team disagreed, but one day afterward Brian was talking to Bruce about a side project he was working on and how he wasn't accomplishing it the way he wanted. Bruce asked him to get lunch with him. Brian notes that in life, if someone very smart asks you to get lunch, you should drop everything and do it. In a single night he was able to accomplish everything he had been trying to do. He took his project to work the next day, and they said they wouldn't be able to use it on Windows. Brian started working on ways to deploy it, and that became his starting point with Ruby. He went to Rails full time after that, publishing an article on how to get it deployed. His work with Ruby led to teaching and writing books. When he needs to make something heavily data-driven, he always reaches for Ruby. He isn't interested in scalability, because usually he is working on a small business process behind the firewall, used by fewer than 100 people.

Framework Peer Pressure

Brian talks about the fear of and pressure to use the latest and greatest frameworks in the development community. The only people who know what framework a person uses are the developer and their peers, and you don't get paid to impress peers in the community; a developer gets paid to solve people's problems. Charles and Brian add that using new frameworks is great and can teach you new ways to solve problems, but however a person solves a problem, it should be celebrated. Learn new things, but don't make people feel bad for not doing things the same way you do. Brian adds that another reason he likes Rails is that much of it came from Basecamp: it is well developed and well tested, and the framework is strong. Sometimes frameworks come out that weren't well thought out. Rails is not an academic framework, but by design it is easier to integrate with or upgrade.

What contributions have you made to the Ruby community?

Brian says that getting Rails deployment working on Windows is one of his proudest moments. Other than that, his contribution has mainly been helping people find mentors; most of the work is done by volunteers, and it helps a lot of people. Charles adds that open source project contributions tend to get glorified, but efforts like these are really what make the community great.

What are you working on now?

Brian talks about how he is working on a book, but he can't say much about it at the moment. He also works on the content team at DigitalOcean, where he helps community authors with their writing and with getting it published. He also draws on his system-admin background to test that each article works, which he finds a good way to keep his skills sharp. He is also working on a project in Elixir to help teachers work better in the classroom. A teacher teaching development can use the program, CodeCaster, to display code on the screens, and the students can flag things they don't understand or let the teacher know that the class is moving too fast. It also lets students send code up for the teacher to check, and lets the teacher grab a snapshot of what's on a student's screen.



Exercises for Programmers
tmux 2: Productive Mouse-Free Development


Coursera on AI
Artificial Intelligence in Python



          Jaspersoft Studio 6.3.2   
A new, free, open source report designer
          OpenOCD and the Bus Pirate   
As an enthusiastic Open Hardware supporter, I regularly read the always brilliant Dangerous Prototypes blog. Last week it featured a short but complete tutorial about unbricking a Seagate Dockstar with OpenOCD and the Bus Pirate. The Bus Pirate is an open source hacker multi-tool that talks to electronic stuff, and it can be used as a JTAG adaptor […]
          18 Paying Markets for Tech Articles   
The whole world is looking for techies, which means if your area of expertise is web development, website design, and/or all those things with confusing initials, you can write about it and make money!

Here is a list of publications that are hungry for your knowledge, and quite willing to offer you a decent amount of money for your articles, blog posts, and tutorials.

For more paying markets go HERE.

A List Apart explores the design, development, and meaning of web content, with a special focus on web standards and best practices. Length: 1,500 - 2,000 words. Payment: $200 per article. Read their submission guidelines.

SitePoint publishes articles about HTML, CSS, and Sass. Payment: $150 for articles and $200 for tutorials. A tutorial is generally any in-depth article that has either a demo or code download link or that is very code-heavy in general, even if it doesn’t have an actual demo. Payment: $300 or more for articles and tutorials that are lengthier. Read their submission guidelines.

Word Candy provides content on a variety of topics relating to WordPress, online marketing and entrepreneurship. Payment: 6 cents/word. Read their submission guidelines.

The Layout features how-to articles on WordPress geared to business. They are looking for articles from experts in the field, whether you're a designer, developer, or just a knowledgeable writer. Length: 700 - 1,200 words. Payment: Up to $150. Read their submission guidelines.

Tutorials Point publishes all kinds of tech-related tutorials. They are specifically looking for people having sufficient domain knowledge in the following areas: Information Technology, Software Quality management, Java technologies, Mainframe technologies, Web development technologies, Project Management, Accounting and Finance, Telecommunication, Big Data, Microsoft Technologies, Business Intelligence, SAP Modules, Open Sources, Soft Skills, Academic Subjects from Engineering and Management Syllabus. Payment: $250 - $500. Read their submission guidelines.

WPHUB - "all things WordPress." WPHUB focuses on the WordPress development community, specifically theme developers, plugin authors and customization specialists. They are looking for writers with some development background. Payment: $100 - $200 per article. Read their submission guidelines.

Another market publishes articles and tutorials on Photoshop. Payment: $25 - $50 for articles, $50 for quick tips, and $150 - $300 for full tutorials. Read their submission guidelines.

Vector Diary publishes tutorials about Illustrator. "If you have anything interesting and new to share about Illustrator, you are welcome to write for Vectordiary. It can be a technique you use in your projects, or it can be a step-by-step guide to drawing an illustration. Anything readers are keen to know can be submitted." Payment: $150. Read their submission guidelines.

Linode includes a community of authors who contribute to Linode Guides and Tutorials. "We are always looking for guides on popular, trending topics, and updates to existing guides." Payment: $250. Read their submission guidelines.

The Write Stuff publishes articles about database development and management. Payment: $200 in cash and $200 in Compose database credits. Read their submission guidelines.

Indeni publishes articles about IT operations. "If you’re really good with firewalls, load balancers, routers, switches or severs, we’d like to work with you." Payment: $200. Read their submission guidelines.

The Graphic Design School Blog is looking for writers skilled enough with software to write a beginner tutorial in either Photoshop, Illustrator, InDesign, or open source design or utility software for designers. Payment: $100 - $200. Read their submission guidelines.

AppStorm caters to software users. Payment: Around $60, but the rate varies depending upon the type of article you choose to write. Read their Writer’s Guide before you submit your pitch.

Make Tech Easier is a tech tutorial site that teaches people the easier way to handle complicated tech. "We cover tutorials for various operating systems such as Windows, Mac and Linux, Mobile OS (iOS and Android), popular web app like Browsers (Firefox/Chrome), WordPress, and gadgets reviews. We are always looking for more writers to help us turn this site into something bigger and better." Payment: Amount not specified. Read their submission guidelines.

WorldStart is looking for tips for their e-mail newsletter, WorldStart’s Computer Tips. This is published daily to 300,000 readers and focuses on tips and tricks the average computer user can utilize. "We are also seeking feature articles for our website covering any and all aspects of computing." Payment: $15 - $35. Read their submission guidelines.

Labmice is a site for serious techies. They are looking for Field Notes, Best Practices, lessons learned, white papers, written material, guidelines, how-to's, technical explanations, etc. about almost any IT topic, including Windows 2000 Administration, Computer Security, Technical Project Management, etc. "Obviously we want "real world" documents, and not things that are easily found in any textbooks. You must be the original author/creator of the document and must declare that all of the content of the submitted work is original, unless referenced with permission from the original author." Length: 1,000 to 1,500 words. Payment: Negotiated. Read their submission guidelines.

Tutorial Board is looking for tutorials in graphics by writers who are skilled with Adobe Photoshop, Adobe After Effect, Autodesk Maya or any other industry standard CG software.  Payment: Up to $150 p/tutorial. Read their submission guidelines.

Smashing Magazine publishes articles about Web development and design. "We aim for exciting, creative articles that also cover recent developments within the industry. Writing does take time, but substance is more important than length." Payment: Not Specified. Read their submission guidelines.

          Senior Software Test Automation Engineer - Jurong Island   
Familiarity with commercial and open source test automation and test case management technologies such as JMeter, Robot Framework, Selenium, Watir or Hudson etc...
From Jobs Bank - Tue, 27 Jun 2017 10:03:13 GMT - View all Jurong Island jobs
          Automation Test Engineer for open source frameworks (Investment Banking) - Pasir Ris   
Experience in Continuous Integration Tool – Jenkins / Hudson. Optimum Solutions (Co....
From Jobs Bank - Wed, 28 Jun 2017 09:54:55 GMT - View all Pasir Ris jobs
(pc-Google Images)
Linux, BSD, Solaris and other open source systems are vulnerable to a local privilege escalation flaw known as Stack Clash that enables an attacker to execute code as root. Major Linux and open source vendors have made patches available today, and systems running Linux, OpenBSD, NetBSD, FreeBSD or Solaris on i386 or amd64 hardware should be updated soon.

The risk presented by this flaw, CVE-2017-1000364, becomes particularly acute if attackers are already present on a vulnerable system. They could chain this weakness with other critical issues, including the recently addressed Sudo vulnerability, and then run arbitrary code with the highest privileges, said researchers at Qualys, who discovered the vulnerability. The flaw was found in the stack, a memory management region on these systems. The attack bypasses the stack guard-page mitigation introduced in Linux in 2010, after attacks in 2005 and 2010 targeted the stack.

In its advisory, Qualys recommends increasing the size of the stack guard page to at least 1 MB as a short-term measure until a fix can be applied. It also recommends recompiling all userland code with the -fstack-check option, which would keep the stack pointer from moving into other memory regions. Qualys concedes, however, that this is an expensive solution, though one that cannot be defeated unless there is an unknown vulnerability in the -fstack-check option.

          CMDBf Specification Implementation Options   
-by Van Wiles, Lead Integration Engineer

The CMDBf 1.0 spec could be implemented in many ways by many parties.

Here are a few ways the CMDBf 1.0 spec (not a standard yet) can be implemented.


First - it is important to understand that the spec describes web services. These services can be written and delivered by anyone, not just the vendor of a particular CMDB or MDR. So let's say there are three scenarios (there may be more):


1. Each CMDB and MDR vendor produces its own CMDBf services for its own product lines
2. Third-party software vendors produce CMDBf services for popular CMDB and MDR products and market these adapters independently
3. Consulting service providers produce CMDBf services for custom applications to be used with other CMDBf implementations


I will guess that these are all viable options and could very well come to market in the reverse order from above (custom implementations first).


Second, the spec can be implemented in many ways - "push-mode", "pull-mode" or both, varying levels of query support, varying record types, etc.
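To make the query-support point concrete, here is a rough sketch of a pull-mode graph query. The itemTemplate/relationshipTemplate structure follows the CMDBf 1.0 draft, but the namespace URI, record types, and template ids below are illustrative assumptions rather than excerpts from the spec:

```xml
<!-- Illustrative only: ask an MDR's Query Service for Computer System
     items, Incident items, and the relationships linking them.
     Record types and ids are made up for this example. -->
<query xmlns="">
  <itemTemplate id="computers">
    <recordConstraint>
      <recordType namespace="" localName="ComputerSystem"/>
    </recordConstraint>
  </itemTemplate>
  <itemTemplate id="incidents">
    <recordConstraint>
      <recordType namespace="" localName="Incident"/>
    </recordConstraint>
  </itemTemplate>
  <relationshipTemplate id="affects">
    <sourceTemplate ref="incidents"/>
    <targetTemplate ref="computers"/>
  </relationshipTemplate>
</query>
```

A Query Service would respond with the matching items and relationships grouped by template id; how rich a recordConstraint an implementation accepts is precisely the "varying levels of query support" mentioned above.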


Now comes a tricky question - can you market these adapters without a common data model, or an exponentially-expanding set of data-mapping objects? The point here is that some maturity will be required before this is really plug-and-play.


So, leaving the modeling/mapping question aside for now, let's see how a third party could produce adapters for a pair of MDRs. I'll start with a picture.


In the picture above, all the CMDBf services and processes are provided by a third party (call it Lavender Software). Lavender has adapters for each MDR's proprietary interface, plus transformers to register items and relationships from the Vendor A CMDB to the Vendor B CMDB, and a user interface to query the registered CMDBf Query Services.


This picture could change if one or both vendors produce their own CMDBf services and the customer wants to switch. In that case, Lavender Software might provide a way to switch services to another vendor, and you can imagine that it would be much less painful to exchange information between CMDBf services in a common format, at least for registration. Additionally, it would be really helpful to the query UI if items of like kind had common type names (like Incident or Computer System, for example).



In the final picture, vendors A and B have provided XML schemas and registration processes for their services, and a contract integrator has been hired to connect the services. All that is required is for the contractor to use the registered services and provide a query client interface for the customer.



Final note: open standards and open source go together like shoes and socks (except that standards smell better with age). There is an open source project giving some very interesting insight into a CMDBf implementation over SML/CML, starting here: This is part of the Eclipse COSMOS project, which is sponsored by some of the original consortium companies. It should be a useful starting point if you are interested in implementing CMDBf.


The postings in this blog are my own and don't necessarily represent BMC's opinion or position.

          Platform9 Raises $22M to Make Open Source Cloud Infrastructure Tech Easier   

HPE, Redpoint, Menlo participate in Series C round led by Canvas Read More

           Open Source Festival: the Open Source Festival should "not get much bigger"   
Philipp Maiburg, organizer of the Open Source Festival, talks about old heroes, amateur football, and annoying questions about security.
          Symbian OS Goes Open Source   
On Wednesday, the foundation overseeing the world's most popular smartphone OS, Symbian, announced that it had made the operating system entirely open source.

Read "Symbian OS Goes Open Source"...
          Propensity to Cycle Tool – a new way to reach policy makers?   
Introduction: This presentation outlines the use of the Propensity to Cycle Tool (PCT). Methods: The PCT is a new, open source and freely available ( planning support system for England. Using a range of different scenarios, it highlights cycling potential at area and route level. Initial data are for commuting trips from the 2011 Census. The […]
tkabber-1.1.2-alt1  build Vladimir D. Seleznev, 29 june 2017, 16:14

Group: Networking/Instant messaging
Summary: Tkabber is an open source Jabber Client.
- 1.1.2 released
- added tcl-xmpp subpackage
          Comment on Kontakt by Rudy   
Hello, I am looking for the RRZE icon set, which I use for many of my open source projects. Unfortunately, it is no longer available via . Could you please tell me where this icon set can now be found? Many thanks in advance! /rudy
          Comment on 2017’s Best 5 Bitcoin wallets for your Android mobile device Reviewed by Ivor   
Agree with Scott above... Coinomi is pretty cool and probably deserves a lot more attention. I was looking at the 99BTC reviews, and Exodus is rated 9 and Trezor 8.9? WTF? Exodus reuses addresses, isn't HD, and isn't open source! I noticed it only gives the user one address when I planned on storing my Decred on it, and I did a bit of digging. Yeah, research this shit *before* sending money to it... I know! I think maybe Exodus has multiple addresses for Bitcoin. It is certainly a nice-looking wallet, and ShapeShift txns seem to go more smoothly on it than on Jaxx.
          MBRFilter helps protect the MBR against malware   

MBRFilter is an open source tool developed by Talos (Cisco); it is a Windows driver that changes the state of the MBR and sets it to read-only mode, so that no one can write to that sector without authorization.

tags: security

» original story (

          Offer - Thunderbird Tech Support %1(888)-337-5333% - USA   
Contact Mozilla Thunderbird Technical Support @ +1-888-337-5333 USA/CA by phone 24*7 to fix all issues. Mozilla Thunderbird is a cross-platform, open source email client that competes with the likes of Microsoft Outlook and Eudora Mail. Mozilla Thunderbird started as a branch of the Mozilla Firefox web browser project and was maintained by a small team until Mozilla Corporation handed it over to an independent organization in order to concentrate its efforts on Firefox.
          Microbe-ID: an open source toolbox for microbial genotyping and species identification   
          BEST PHP Training in Noida   
PHP (a recursive acronym for "PHP: Hypertext Preprocessor") is a widely-used open source general
          Site Reliability Engineer (Software) - Indeed - Andhra Pradesh   
Serve as subject matter expert for multiple proprietary and open source technologies. As the world’s number 1 job site, our mission is to help people get jobs....
From Indeed - Fri, 23 Jun 2017 07:16:14 GMT - View all Andhra Pradesh jobs
          A Deep Learning Performance Lens for Low Precision Inference   

Few companies have provided better insight into how they think about new hardware for large-scale deep learning than Chinese search giant, Baidu.

As we have detailed in the past, the company’s Silicon Valley AI Lab (SVAIL) in particular has been at the cutting edge of model development and hardware experimentation, some of which is evidenced in its publicly available (and open source) DeepBench deep learning benchmarking effort, which allows users to test different kernels across various hardware devices for training.

Today, Baidu SVAIL extended DeepBench to include support for inference as well as expanded training kernels. Also of

A Deep Learning Performance Lens for Low Precision Inference was written by Nicole Hemsoth at The Next Platform.

          inhereted a server   

OK, so I just got a SunFire V20z server with dual AMD procs, a pair of 78 GB 10k drives, and 4 GB of RAM. I am going to install Windows 2003 Standard with the drives mirrored; is there anything else I should do to maximize the install? Looking for any input, except using any open source OS, as that is not permitted in my organization.



This topic first appeared in the Spiceworks Community
          JigSpace: A New Free Application for Making 3D Presentations   

3D optical illusion

In my opinion, one of the functions of libraries and librarians is to facilitate the sharing of ideas, particularly ideas that can move the world forward. In that spirit, I want to tell everyone about a new free downloadable application named JigSpace. With this Windows or Mac desktop app, anyone can create 3D animated presentations called Jigs. Jigs can explain, show, or teach anything in an intuitive and memorable way.

If a Picture Is Worth 1,000 words, a Jig Is Worth 1,000 Pictures

The people who invented JigSpace describe its advantages in this way: "We learn better in 3D. Jigs are better. If a picture is worth 1,000 words, a Jig is worth 1,000 pictures. Jigs will reach your audience faster and make a bigger impact than any other media." This might be overstating the case some, but the only way to find out is to see what Jigs are about.

  • First, here's a 4-minute video on the AngelList website introducing JigSpace.
  • Check out Zac Duff's tweet from April 2017. The high school student within me jumps up and down when seeing that. Zac Duff is a JigSpace co-founder and also an artist, a programmer, and a designer — a rare combination of creative traits.
  • To see how JigSpace might be put to use, see my YouTube video of a wacky idea of mine to Cool Chicago Using Saved Winter Ice. I created this presentation using LibreOffice and recorded it for YouTube with Simple Screen Recorder, on my Linux laptop.
  • Now, look at the Jig on my idea that was produced by the folks at JigSpace. I've had better success viewing this Jig with Firefox than with Chrome.

Notice how you can move through the 17 slides in this digital presentation. And you can also grab any slide with your mouse and view the scene from different angles — somewhat like the orbit tool in the 3D modeling software SketchUp.

To get going with JigSpace, see their simple Quickstart Guide. You can also ask any question in their discussion forum.

Everyone Can Be an Inventor

When I was in high school, I was constantly thinking of inventions I wanted to build. Having a tool like JigSpace would have allowed me to communicate what I was thinking to others. I'd venture to say that skill at building Jigs would boost the inclination of people to think of new ideas in 3D. We need all ideas on deck these days.

My Hopes for JigSpace

I want to note that JigSpace is very new. The application called Jig Workshop is alpha-stage software and is still under development. The downloadable desktop app is for Windows and Mac, but I hope it will become available for Linux as well.

If you know any youngsters who love using SketchUp, Blender, and other 3D tools, tell them about JigSpace. While it is intended for youth and adults, I see JigSpace developing a strong following with the younger set. It might even be fun for school districts, cities, and states to run contests to see which schools (and individual students) can create the best Jigs in a fixed amount of time. Maybe we could even have FIRST Robotics teams compete against each other to describe ideas in 3D using JigSpace.

A Software Tool That Works Well with JigSpace

Keep in mind, too, that skill with other software, such as drawing applications like Inkscape, might come in handy when building Jigs. Inkscape is free software that runs on Linux, Mac, and Windows. It is a favorite of schools and school districts teaching digital design. The best place I know of to learn Inkscape is the set of 100 high-quality screencasts here.

About the Author

Phil Shapiro is a librarian, educator, and technology access activist in the Washington, D.C., area. He has found inspiration in the learning that goes on at after-school programs, adult literacy organizations, public libraries, and organizations bringing music instruction and the arts to children. He is a true believer in public libraries as the central social, educational, and creative institutions in our communities.

Image: Fred the Oyster / CC0

          Hardware Automated Dataflow Deployment of CNNs. (arXiv:1705.04543v3 [cs.OH] UPDATED)   

Authors: Kamel Abdelouahab, Maxime Pelcat, Jocelyn Serot, Cedric Bourrasset, Jean-Charles Quinton, François Berry

Deep Convolutional Neural Networks (CNNs) are the state-of-the-art systems for image classification and scene understanding. However, such techniques are computationally intensive and involve highly regular parallel computation. CNNs can thus benefit from a significant acceleration in execution time when running on fine-grain programmable logic devices. As a consequence, several studies have proposed FPGA-based accelerators for CNNs. However, because of the huge amount of hardware resources required, none of these studies was based on a direct mapping of the CNN computing elements onto the FPGA physical resources. In this work, we demonstrate the feasibility of this so-called direct hardware mapping approach and discuss several associated implementation issues. As a proof of concept, we introduce the haddoc2 open source tool, which is able to automatically transform a CNN description into a platform-independent hardware description for FPGA implementation.

          Java Open Source Developer   

          UAS SIMD (Final Semester Exam – Da'wah Management Information Systems)   
Name: Rendra
Student ID (NIM): 208400776
Department: Da'wah Management
Course: Da'wah Management Information Systems

1. Explain what is meant by a database.

Definition of a Database
• A database is a collection of information stored systematically in a computer so that it can be examined by a computer program to obtain information from it.
• A database is a representation of a collection of interrelated facts, stored together in such a way, and without unnecessary redundancy, as to meet a variety of needs.
• A database is a collection of interrelated information on
• A database is an arrangement of an organization's or company's complete operational data records, organized and stored in an integrated way using a particular method on a computer, so that it can supply the optimal information needed by its users.
2. Name some applications for building databases! Explain their advantages and disadvantages!
• database. BLOBs help to implement the second one. Using Delphi 6 as ... converting from 8-bit to 7-bit has the consequence of increasing the size of the converted data ... Run the application by pressing F9. This application will ... has reached stable version 1.1.7 ... Although very similar to Delphi, there are some fairly fundamental differences.
• For database applications (PostgreSQL), without embedding images ... When a program is built correctly, its code can be reused ... The latest version of Delphi is version 7, with added .NET features ... In application programs we often use command buttons. Database Desktop is a database application system that comes pre-programmed.

• DOS-based database applications. For the results obtained from the program that was built ... building an OLAP application with Borland Delphi 7 ... testing and analysis of an optimized AT89C51-based gate security system; the application program on the personal computer was built with Borland Delphi. The tables are created with the Database Desktop utility ... applications compiled with a newer Delphi version ... database applications. Its relatively easy use has led to wide adoption.

3. Explain decision-making systems.

A decision-making system, that is, the system model used to make decisions, can be closed or open. A closed decision-making system assumes that the decision is isolated from unknown inputs from its environment, while an open decision-making system views the decision as taking place in an environment that is complex and partly unknown.
4. What is the urgency of a da'wah management information system for a da'wah organization? Explain!
Organizing is the unification, grouping, and arrangement of an organization's officers so that they can be mobilized in a line of work as planned. It is very important because organizing is a primary requirement in management.

5. Explain information system design!
Information system design concerns a system consisting of interrelated activities that fulfill a company's important objectives, such as inventory control or production scheduling. The purpose of system design: data will form the logic. Through this logic, the system receives input from the outside environment, interprets that input, and then makes decisions that produce reports. If the design objectives are poor, the company's system will be poor as well, or will fail.
6. How should an information system be managed? Explain!
In the e-Government transformation process, there are four stages that ultimately lead to optimized value and increasing application complexity:
• Level 1 – Preparation: build a website as an information and communication medium for each institution, and promote the website internally and to the public.
• Level 2 – Maturation: build interactive public-information websites, and build interfaces for connectivity with other institutions.
• Level 3 – Consolidation: build websites that handle public-service transactions, and establish application and data interoperability with other institutions.
• Level 4 – Utilization: build applications for services that are Government to Government (G2G), Government to Business (G2B), and Government to Consumers (G2C).
To realize fully integrated management and implementation of information systems, the first step is to create a master plan: a mature e-Government concept and plan. This master plan then becomes the basis each time an information system is built and implemented according to the needs of each regional government institution. Besides being adapted to the characteristics of each region, the master plan must also align with the Regional Strategic Plan (RPJMD).
Information systems today are being directed toward integration, which will ultimately be useful for implementing an Executive Information System (EIS). An EIS serves policy makers at the strategic level, e.g., a Regent or Governor. At this level, data functions not merely as information but as knowledge that forms the basis for decision-making and policy. Knowledge is information that has been organized and processed so that it becomes understanding and expertise when applied to actual situations and activities.

7 . What is meant by a dakwah management information system? Explain!

According to Robert G. Murdick and Joel E. Ross in their book, a dakwah management information system is a communication process in which input information is recorded, stored, and processed to produce output in the form of decisions about planning, operation, and supervision.

According to Drs. Soetodjo Moeljodiharjo in his book "Management Information System", an MIS is a method for producing timely information for management about the environment outside the organization and the operations inside it, with the aim of supporting decision making and improving planning and supervision.

8 . What benefits does a management information system offer managers and da'i?
Tangible benefits
A well-built and well-maintained information system delivers tangible benefits whose effects can be observed directly in the revenue earned and the costs incurred by the enterprise.
Indicators of success on the revenue side are increased sales in existing markets and expansion into new ones.
A good information system is used not just for electronic data storage; it must also support the analysis that management requires.
With such support, management obtains accurate, reliable, up-to-date, and easily accessible information on the company's sales.
Because reports are produced quickly and can be accessed at any time, decisions can be made faster and more precisely in response to market dynamics.
On the cost-reduction side, factual analysis can target reductions in staffing, operational costs such as supplies and overhead, warehouse stock, maintenance costs, and spending on equipment.
An example of staff reduction is the recording of financial transactions. If accounting previously required at least five people, a well-implemented accounting information system (AIS) lets one person do the work.
This is because, with an integrated AIS, every bookkeeping entry can be processed directly from the relevant department without re-keying the data.
Financial reports can likewise be generated automatically from those transaction records without re-entry.
The accumulation of production materials, which often burdens company assets, is greatly eased by a supply chain management (SCM) module in the information system.
With good SCM support, stocks of production material can be kept to a minimum: the company orders from its suppliers only when stock reaches the minimum level.
Prices can also be very competitive because they are obtained from several suppliers, which clearly benefits the company.
Reducing headcount in turn lowers the investment in equipment, which in turn lowers maintenance costs.
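The SCM rule described above, ordering from a supplier only when stock falls to a minimum level, can be sketched with the classic reorder-point calculation. This is an illustrative model only, not part of any particular SCM product; the function names and figures are hypothetical.

```python
def reorder_point(daily_demand: int, lead_time_days: int, safety_stock: int) -> int:
    # Classic reorder-point formula: expected demand during the
    # supplier's lead time, plus a safety buffer for variability.
    return daily_demand * lead_time_days + safety_stock

def should_reorder(on_hand: int, reorder_point_qty: int) -> bool:
    # Order only when stock has fallen to the minimum level,
    # keeping warehouse inventory as lean as possible.
    return on_hand <= reorder_point_qty

# Hypothetical figures: 40 units/day demand, 5-day lead time, 60 units safety stock.
rp = reorder_point(daily_demand=40, lead_time_days=5, safety_stock=60)
print(rp)                      # 260
print(should_reorder(250, rp)) # True: stock is below the reorder point
```

With numbers like these, the company holds at most a few days of material instead of a large standing stock.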
Intangible benefits
Often it is these intangible benefits that become the critical point for the running of a company's business.
Because they are intangible, the following aspects are often overlooked or left untracked:
1. Increased customer satisfaction
Suppose you go to a supermarket. Which will you choose to shop at: the store with the shorter checkout queue, or the other one?
You will surely choose the first, even if you may have to pay a little more than at the second store.
It turns out the first store has implemented a sales information system that processes transactions faster and makes data entry easier.
2. Increased employee satisfaction
Complaints often arise from employees who feel their entitlements, such as overtime pay, have not been honored.
This typically stems from miscalculation by management that still works manually or re-keys data.
If the company instead provides an attendance system integrated with the personnel information system and the AIS, accurate and correct incentive reports can be generated automatically.
That is just one example, alongside credit-point calculations, leave entitlements, career paths, education and training, and so on.
3. Better and more plentiful information
Information is a vital component of business today. Whoever commands information can respond more quickly to change and to future trends.
A well-implemented information system produces reports compiled from data managed in a high-quality, comprehensive database.
This is possible because every report is generated automatically by the computer.
4. Better and more numerous management decisions
Undeniably, every decision depends heavily on the information supporting the policy to be taken.
That is only possible if the information system presents information that is relevant, accurate, current, and retrievable at any time.
5. Faster and better responses to competitors
Business intelligence has long been important, in many formats and for many purposes.
Responding quickly and accurately to competitors' moves requires an information system that can gather, analyze, and compile the information the company's decision makers need.
6. Greater operational efficiency and flexibility
What business owner would not want this?
The more efficient and flexible an operation, the lower the cost of running it.
This is achieved by cutting bureaucratic chains within the company once a good information system is in place.
7. Better internal and external communication
A good information system must be supported by a reliable electronic data communication network.
With it, parties inside and outside the company can exchange information more effectively and efficiently.
8. Better planning
Planning is an essential process for business, but whatever plan is made requires adequate supporting information to carry out.
Without it, the plan can lose direction and miss its targets because the information underlying it is wrong.
9. Better control and supervision
With a well-built and well-maintained information system, every activity in the business environment can be monitored continuously.
That monitoring improves control over every procedure and activity occurring within the company.

9 . Explain:
a. Network protocols

A protocol is the set of agreements a computer system needs in order to communicate with another computer; both computers must share the same conventions. A protocol also describes or defines what is communicated and when the communication takes place.
Three protocols commonly used in computer networks are:
1) NetBEUI (NetBIOS Extended User Interface): a protocol with a 16-character naming scheme, 15 characters for the name and 1 character for the entity; it is local in scope.
2) IPX/SPX (Internetwork/Sequenced Packet Exchange): much like NetBEUI, but it adds routing and remote-console capability.
3) TCP/IP (Transmission Control Protocol/Internet Protocol): the one in common use; its addresses are written as a few numbers separated by dots (dotted decimal).
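As a small illustration of TCP/IP in practice, the sketch below opens a TCP connection over the loopback interface (the standard dotted-decimal address 127.0.0.1) and echoes a message back. The helper names are our own; this is a minimal example, not a production server.

```python
import socket
import threading

def run_echo_server(sock: socket.socket) -> None:
    # Accept one connection and echo whatever it receives back.
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def tcp_echo_roundtrip(message: bytes) -> bytes:
    # Server socket bound to an ephemeral port on the loopback interface.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=run_echo_server, args=(server,))
    t.start()

    # Client side: connect over TCP, send, and read the echoed reply.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message)
        reply = client.recv(1024)

    t.join()
    server.close()
    return reply

print(tcp_echo_roundtrip(b"hello"))  # b'hello'
```

The same dotted-decimal addressing and connection model underlies every TCP/IP service, from file transfer to the web.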

b. Optical networks

A computer network is a collection of computers, printers, and other devices that are interconnected. Data travels over cables (here, optical fiber), allowing users to exchange information, documents, and data, and to print on a shared printer; hardware and software connected to the network can also be shared.
In this era of globalization a computer network is practically a necessity, because it helps its users work faster, more practically, and more efficiently in both effort and time.

c. Bandwidth
Bandwidth (also called data transfer or site traffic) is the volume of data flowing in and out of (uploaded to and downloaded from) your account.
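Since link bandwidth is usually quoted in bits per second while file sizes are given in bytes, a quick conversion shows how long a transfer takes in the ideal case. This is a hypothetical back-of-the-envelope sketch that ignores protocol overhead and congestion.

```python
def transfer_time_seconds(size_bytes: int, bandwidth_bps: int) -> float:
    # Ideal time to move `size_bytes` over a link rated at
    # `bandwidth_bps` bits per second (1 byte = 8 bits).
    return (size_bytes * 8) / bandwidth_bps

# A 25 MB file over a 100 Mbit/s link:
t = transfer_time_seconds(25 * 10**6, 100 * 10**6)
print(f"{t:.1f} s")  # 2.0 s
```

Real transfers take longer, since headers, retransmissions, and shared traffic all eat into the nominal rate.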

10 . What is meant by the Internet and the World Wide Web? How have the two changed the role played by information systems in organizations?
The Internet:
1. Is the largest network, connecting millions of computers around the world.
2. Is a wide area network (WAN).
3. Uses the TCP/IP protocol suite to transmit data and information.
4. Uses communication lines such as coaxial cable and optical fiber.
5. Lets you obtain and access information, communicate, do business, learn, and shop.
6. Provides data communication services such as file transfer, electronic mail, and newsgroups.
7. Lets users reach one another and access information from the web pages it contains.
The World Wide Web:
1. Consists of a network system linking computers around the world and delivering information in multimedia forms such as sound, images, animation, video, and text.
2. Sites are connected through hyperlinks, which let users move from one document to another.
3. Lets users move quickly from one document to another on the Internet.
4. Provides the means to search for and retrieve any information resource on the Internet easily and quickly.

Advantages of the Internet:
1. Messages travel very fast.
2. Supports multimedia and hypermedia.
3. Enables global communication.
4. Provides remote computer access.
5. Interactive: lets users connect and communicate in more than one direction.
Disadvantages of the Internet:
1. Less certainty that information is accurate.
2. Many last-minute changes to plans.
3. Material must be filtered.
4. Subversive, uncontrolled interaction.
5. Leakage of secret or personal information.
6. Criminal activity that can threaten individuals or even nations.
7. Intrusions occur easily.
8. Unrestricted advertising is easy to encounter, such as pornographic advertising.
Importance of the Internet:
1. Provides electronic communication services.
2. Provides remote-access application services.
3. Enables easy and fast file transfer.
4. Enables wider and more dynamic dissemination of information.
5. Provides information-retrieval application services.

11 . What is the purpose of an information system from the dakwah perspective? What role does it play in the dakwah information value chain?
This indicator describes the achievement of long-term objectives as formulated in the goals, covering both positive and negative impacts. It can only be known if measurement is carried out continuously over a sufficiently long period.

A dakwah activity can succeed for various reasons, as follows.
First, because the dakwah message delivered by the da'i is genuinely relevant to the situation and needs of the community, a necessity they cannot refuse, so they receive the message enthusiastically.

Second, because of the da'i's personal charisma: the da'i has a personal appeal that makes the community receptive to the message even when the dakwah delivered is simple in quality.

Third, because the community's psychological condition is easily touched and thirsty for spiritual nourishment, and they already hold a positive perception of the da'i, so that even a message that is actually unclear is interpreted by the community in a favorable way.

Fourth, because the dakwah is packaged attractively, so that people who were once indifferent to religion respond positively once they see dakwah presented differently, for example through the arts. The dakwah then succeeds and is received positively by the community.

It is not correct to measure the success of dakwah only by the size of the congregation attending a religious event, since attendance is only one indicator. The success of dakwah can be measured by the religious awareness that emerges in the community as a result of it, whether that awareness takes the form of behavior, attitude, or conviction.

12 . Define and explain open source software and Linux. What are the benefits of each?
Computer software is a collection of electronic data stored and managed by a computer; that data can be a program or instructions that carry out commands. It is through software that a computer can execute commands. Open source software is software whose source code is freely available to be studied, modified, and redistributed.

Linux is a fast operating system with the ability to serve multiple users, act as an Internet server, and support an easy-to-use graphical interface. Getting started with Linux, however, has a reputation as a daunting task, because at first glance it looks complex to the untrained eye. In response, Linux education has become widely available and accessible, allowing newcomers to gain intimate knowledge of the operating system and the skills needed to use the software effectively.

13 . What is the use of information policy and data administration in information management?

As computer and communication technology develops, managing a computer and network security program becomes increasingly complex and challenging. The information security manager must establish and maintain an information security program that ensures the three basic requirements of the organization's information resources: (1) confidentiality, (2) integrity, and (3) availability of data/information.
A. Confidentiality of data/information
Confidentiality here means protecting the data/information in the system so that it can be accessed only by authorized parties. In the past the assumption was that only the military and diplomats held information that had to be kept secret; in fact business and individuals need confidentiality too. With the advance of computer and communication technology and global competition, the need for information confidentiality keeps increasing.
For the information to be used optimally, confidentiality must be defined precisely and maintained through careful procedures. The most prominent aspects of confidentiality are user identification and authorization, as discussed earlier.
1. Threats to the confidentiality of data/information
Confidentiality can be compromised in many ways. The following threats to the confidentiality of information are common:
- Hackers: people who try to break through access controls by exploiting security holes in the system. Their activity is a serious threat to information security.
- Masqueraders: parties who are not actually authorized but obtain access using another party's user ID and password in order to profit from the computing resources. This often happens in organizations whose employees like to share passwords.
- Unauthorized activity: the result of weak access control, which allows unauthorized parties to act within the system.
- Downloading confidential files without protection: downloading a confidential file may be legitimate, but the process requires care. If a confidential file is moved from a secure host to an insecure client, the file can be accessed by unauthorized parties.
- LANs: a computer network can itself threaten confidentiality, because data flowing through a LAN can be seen by anyone on that network. Encryption is one of the best protections for confidential files while they are transmitted over a LAN.
- Trojan horses: active programs designed to infiltrate and copy confidential files. Once executed, a trojan horse settles into the system and routinely copies certain files to unprotected locations.
Security awareness among users, and the discipline of information security professionals, are essential to minimize these threats.
2. Confidentiality models
A confidentiality model describes the actions that must be taken to guarantee the confidentiality of information. It contains the specification of the security tools and materials used to reach the desired level of security.
The best-known confidentiality model is Bell-LaPadula. It describes the relationship between objects (files, programs, or information) and subjects (people, processes, or devices). The relationship is defined by the access rights or level granted to a subject (known as its security clearance) and the sensitivity level of an object (known as its security classification).
Another confidentiality model is access control, which organizes the system into objects (the resources acted upon), subjects (the people or programs that act), and operations (the processes in which subjects and objects interact).
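The clearance/classification relationship in Bell-LaPadula can be sketched as two simple dominance checks: the simple-security property ("no read up") and the *-property ("no write down"). The level names below are illustrative; the model itself does not fix a particular set of labels.

```python
# Clearance/classification levels, ordered from least to most sensitive.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # Simple-security property ("no read up"): a subject may read an
    # object only if its clearance dominates the object's classification.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance: str, object_classification: str) -> bool:
    # *-property ("no write down"): a subject may write an object only
    # if the object's classification dominates the subject's clearance,
    # preventing information from leaking to a lower level.
    return LEVELS[object_classification] >= LEVELS[subject_clearance]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_write("secret", "confidential"))  # False: writing down is forbidden
```

Together the two rules guarantee that information only ever flows upward in sensitivity, which is exactly the confidentiality goal the model formalizes.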
Trusted-system criteria provide good guidance for implementing a confidentiality model. Such criteria are best drawn up by a department of information security, where such a department exists.
B. Integrity of data/information
Data integrity here means protecting the system and its data from unwanted change, whether deliberate or accidental. The security program's challenge is to ensure that data is managed in the state for which it was programmed.
A security program does not by itself improve accuracy, since users put the data into the system, but it helps ensure that all changes are the intended ones.
To preserve data integrity, the system must be protected from manipulation by unauthorized parties, from fraud, and from operational error. This becomes essential for sensitive data such as internal financial reports, production control systems, air traffic, or employee payroll.
Confidentiality policy, identification, and authentication are key elements of a data integrity policy; indeed, data integrity depends on access control.
1. Protecting data integrity
Like confidentiality, data integrity is threatened by hackers, masqueraders, unauthorized activity, unprotected downloads of confidential files, LANs, and malware (viruses and trojans). All of these can trigger unwanted changes to data.
Three basic principles are used to control data integrity:
* Need-to-know access: this must be enforced firmly while interfering as little as possible with users' work, since a security program must balance ideal information security against the company's production activity.
Need-to-know access guarantees that a user entering the system receives only the access rights defined for their role or tasks.
* Separation of duties (an administrative control): no single employee should control a process from beginning to end, so that transaction data cannot be manipulated unless everyone involved in the process takes part.
* Rotation of duties: employees' assignments should be rotated periodically, to counter boredom, fraud, and other negative tendencies. Rotation runs into problems, however, when the organization's human resources are very limited and insufficiently trained.
2. Data integrity models
A data integrity model describes what must be done to enforce the integrity policy and to meet three goals: (1) prevent unauthorized parties from changing data or programs; (2) prevent authorized parties from making improper changes or changes outside their authority; and (3) maintain the internal and external consistency of data and programs.
The first step in building a data integrity model is to identify and label each piece of data with the treatment it receives and to apply two procedures to it.
The first procedure verifies the validity of the data. The second is the legitimate transaction that changes the data, i.e. transforming it into another form as part of maintenance.
Systems that provide data integrity always require every change to be recorded, to make checking and auditing easy.
Another aspect of preserving data integrity concerns the system itself, which must remain consistent and trustworthy at all times.
          Calibre 3.2.1   
Description: Calibre is a free and open source e-book library management application. Calibre is developed by users of e-books for users of e-books. It has a cornucopia of features divided into the following main categories: Library Management: Calibre manages your e-book collection for you. It is designed around the concept of the logical book, i.e., a single entry in […]
          German Nonprofit Creates New Open Source License for Seeds   

(Photo: Vidans / Getty Images)

We know about open-source software and hardware, but can the concept -- decentralized development and open collaboration for the common good -- be expanded to address other global challenges? The nonprofit OpenSourceSeeds based in the German town of Marburg has just launched a licensing process for open-source seeds, to create a new repository of genetic material that can be accessed by farmers around the world, in perpetuity.

We spoke with one of the leaders of this initiative, Dr. Johannes Kotschi, to learn more about exactly how the open source model was adapted for seeds, and why this initiative is so important in an era of increasing global concentration of power in the agriculture industry.

Nithin Coca: Can you tell me a bit about the open-source seeds movement in Germany as well as around the globe? How big is it, is it growing, and who are the members?

Dr. Johannes Kotschi: Open Source Seeds (OSS) is a newly created organization, and we had our launch on the 26th of April in Berlin. We launched with a tomato called Sunviva. A tomato is quite a good symbol -- everybody likes tomatoes, and everyone can grow a tomato. From all over Germany we got requests from gardeners, plant breeders, and open-source activists for our open source tomato.

We are an offspring of AGRECOL, [which] is about 30 years old and focuses on sustainable and organic agriculture -- mainly in the developing world. Within AGRECOL we started working on open source seeds about five years ago -- first as a small working group. 

There is a similar initiative in the United States -- the Open Source Seeds Initiative, based in Wisconsin -- but they are not licensing, they are giving a pledge to varieties. We have different strategies, we, OSS, pursue the legal strategy, and they pursue the ethical strategy, but we are working closely together.

How did the idea for creating open-source seed licensing emerge? Can you tell me about the process that led to the first licensed, open-source seeds? Were there any roadblocks or challenges you had to deal with?

We were inspired by two people. One was Elinor Ostrom, an American sociologist who received the Nobel Prize for Economics for her finding that commons can be used in a sustainable way. She refuted the Tragedy of the Commons -- the famous hypothesis of the scientist [Garrett] Hardin that common resources are overused by the public and therefore have to become private property.

She said no, there are clear rules for managing the commons -- they are managed sustainably -- and she defined seven principles. The other inspiration was the computer scientist Richard Stallman... who created the open source idea and the General Public License.

Our idea was to develop something similar, like a Creative Commons license, but seeds do not fall under copyright; seeds fall under seed laws. So we had to find another legal area in which to design a license.

So we defined a license agreement that falls under German civil law, as a contract that is pre-written for use by a single party, not individually negotiated. We do not violate the seed laws -- they still apply, and our license is supplementary to them -- and this license protects seeds against patents and against plant variety protection.

The license, in a sense, has the main principles of a Creative Commons license. The whole process took us roughly a year, mainly because we had little funding and largely had to rely on pro-bono contributions from lawyers.

Why is having a special license with definable rights so important to protecting seeds and promoting diversity in global agriculture?

Our license is quite radical. It says that if a seed is licensed, this seed and all further developments and modifications [of that seed] fall under this license. So this means you start a chain of contracts -- if the person who has got the seed passes further developments of this seed on to a third person, he becomes a licenser, which means he or she is licensing a new variety.

In theory, this can be indefinite. There is no way back to private domain. [Our license] does not allow any seed company to take the seed, use it for breeding, and put a patent on it. You can work with us, you can earn your money with it, but you have no exclusivity.

This is important because we are living in a time of not only privatization of genetic resources, but the monopolization of genetic resources. Big companies, they are interested in producing few varieties and extending and distributing these varieties for large acreages -- the larger the acreage, the larger their return through royalties.

But what we need is diversity in production, diversity in genetic resources, and diversity in breeders. It is a danger if you are depending on a few companies -- because they tend towards uniformity, their energy for creating innovation decreases as competition gets less and less. They are also producing varieties that do not respond to the needs we have. For example, these big seed companies do not provide what is needed for adaptation to climate change.

Monsanto and Bayer, for example: there you will have a company with a dominating position in producing pesticides and herbicides, and a dominating position in the seed sector -- and they will link these two businesses together. They will produce seeds that correspond with sales of agrochemicals. But in agriculture we need fewer pesticides and more agroecology. We need genetic resources and plants that fight pests and diseases by resistance, not by chemicals.

Can you tell me a bit about what it means if a farmer uses an open-source seed rather than a private, or corporate alternative?

The license, first of all, says there is no limitation on the use of this seed by the farmer. The only limitation is to refrain from privatization. Commercial seeds have become extremely costly, but the other point, which is more important, is that the characteristics of a variety no longer fully meet the needs farmers have today.

And this applies, in particular, to small farmers in the world who are not able to pay the high costs of seeds for seeds from the big companies, or who may not need the varieties which are offered.

How can open source licenses for seeds help stem, or shift, the growing concentration of power in a few large mega-corporations?

Our initiative is a small one that shows an alternative to the existing system; it aims to establish a second column of publicly owned seeds, in coexistence with the private seed sector. I hope that over time this column will grow and be a real alternative for farmers and ultimately also consumers -- to have a choice about what you grow, and what you eat. If you go on observing the market concentration, you are getting more and more dependent on what is dictated by the private sector.

Of course, in the first step, OSS has mainly a political impact. We are not yet in a position to say we have a fully fledged public domain on seeds. There is not yet a real choice -- this choice may develop, but at present we are just starting, and showing this as a mutual alternative to the existing system.

How do you plan to expand the number of open-source seeds? What is your strategy going forward to engage those working in all facets of the agriculture sector?

We are now in the first stage of putting the idea into practice. This includes working together with plant breeders, regulating seed transfers from plant breeders to seed producers, and from producers to traders while ensuring that the chain of contracts is not violated. These are practical and legal questions, not so difficult to answer, but it has to be done.

Our big challenge will be to extend the idea. An important task will be to get breeders to provide newly developed varieties to our initiative -- and we hope that this will steadily grow the number of open source licensed varieties.

Our license has stimulated initiatives in other sectors. For instance, the World Beekeeping Association decided at its annual meeting to adapt our open source license for bees and to do open source licensing of bees. Another initiative is thinking about open source licensing of microorganisms, and a third one is exploring the possibility of open source licensing for animal genetic resources -- farm animals.

Lastly, we need people to help us spread the idea. As we are a nonprofit organization, we are happy to receive donations, and as far as the breeding community is concerned, we welcome requests from plant breeders to license their newly developed varieties. Our license is under German law, but it is valid in most countries.

          Dec 5, 2017: PPPMB Seminar - Jason Stajich at Plant Science Building   

Jason Stajich
Professor, Department of Plant Pathology and Microbiology and Institute for Integrative Genome Biology, UC Riverside

I am interested in the process and mechanisms of evolution. I study this primarily in fungi using comparative, computational, and experimental tools. We utilize genome and RNA sequencing, sequence analysis, molecular evolution and phylogenetics, and molecular biology tools to explore the functions of genes or genomic regions identified by our analyses as being involved in the processes we study.

Most of our work is focused on the zygomycete and zoosporic chytrid fungi (fungi that move!). We also have collaborative projects and interests in Aspergillus, Fusarium, Coccidioides, and Clavispora lusitaniae. The lab is increasingly moving toward questions that relate to symbioses, with new projects on fungal-bacterial antagonism and on the biological symbioses that occur among fungi, algae, and bacteria in desert biological crusts. I also have a new interest in extremophile fungi and am working on projects to understand the halophilic Hortaea werneckii and endolithic Antarctic fungi through genome sequencing and laboratory experiments.

I am involved in many fungal genome projects including co-leading the 1000 fungal genomes project with the JGI and the zygolife project.

In the broader scope I am interested in the evolution of multicellular forms and regulation of development in fungi. I think understanding how differential gene regulation is established can help learn more about the mechanisms of cell type differentiation. We are also studying the cell wall to understand how innovations in the cell wall and dimorphism impact interactions between pathogenic fungi and hosts they infect. These different projects seek to provide new insight into the big picture of how the complexity of life evolved and how host and pathogen interactions co-evolved.

To address this work we also need tools to sift and mine the gigantic datasets that genomics can produce. I have focused on building tools for comparative and computational analyses of genomes including work on the BioPerl and Gbrowse projects and the development of open source software for bioinformatics and life sciences research through the Open Bioinformatics Foundation.

The lab is also focused on the development of databases for fungal genome data to make the genome and functional information more available. I also blog about interesting findings in fungal, microbial, and genome research and share protocols and coordinate projects through a wiki site.


          (USA-FL-Tampa) Pashto/Dari Social Media Analyst   
Pashto/Dari Social Media Analyst Tracking Code 1897-987 Job Description COLSA Corporation is currently seeking experienced candidates for a Pashto/Dari Social Media Analyst position. This position requires outstanding Pashto/Dari written communication skills, as well as strong English communication and writing skills, and the ability to write and edit reports. **This position will require participation in 24/7 on-call work schedules as per customer direction and the candidate selected must be willing and able to work those hours as needed. Participation in both normal business hours and on-call schedules will rotate based on client-driven requirements. ** The selected applicant will read, analyze, and draft communications regarding regional and ideological discussions in specified foreign language media environments. The analyst will develop and maintain close familiarity with designated regional issues and be able to draw upon publicly available online information resources. The analyst will analyze current media statements or postings to predict trends and identify key communicators. The analyst will demonstrate high-level reading and writing capability in the assigned language, incorporate custom social media solutions to understand online environments and communicate trends effectively. The analyst may assist in operations planning. Working knowledge of online social media and different technologies is required. The analyst will keep documentation such as notes, informative papers, and/or white papers written in English. Some travel may be required. Required Experience * Bachelor's degree, military intelligence training or equivalent experience. * Outstanding Pashto and Dari communication and writing skills (ILR 3 or higher for native speakers; 2+ or higher for heritage speakers). * Minimum of 3-5 years of related experience with at least 1-3 years of experience in or directly applicable to MISO and/or open source/social media analysis and reporting. 
* Strong English communication and writing skills. * Working knowledge of desktop applications, world-wide-web and social media. * Aptitude to learn and utilize new software capabilities. * US Citizenship required. Candidate must possess a valid DoD security clearance. Applicant selected will be subject to a government security investigation and must meet eligibility requirements for access to classified information. COLSA Corporation is an Equal Opportunity Employer, Minorities/Females/Veterans/Disabled. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, or national origin. Job Location Tampa, Florida, United States Position Type Full-Time/Regular
          (USA-FL-Tampa) Cyber Intelligence Analyst 3 / 4   
**Cyber Intelligence Analyst 3 / 4**
**Requisition ID: 17015129**
**Location(s): United States-Florida-Tampa**
**US Citizenship Required for this Position: Yes**
**Relocation Assistance: No relocation assistance available**
**Travel: Yes, 25% of the Time**

Northrop Grumman Mission Systems is actively recruiting mid/senior level Cyber Intelligence Analyst team members in support of an existing long-term Government Program Office. This position will entail working with our experienced Cyber Operations team developing solutions and analysis in support of emerging DoD Cyber efforts. We're looking for a highly motivated, team-oriented individual with the ability to communicate and coordinate highly technical analysis directly with the customer and distributed team members.

**Knowledge, Skills and Ability:**
• Conducts research and evaluates technical and all-source intelligence with specific emphasis on network operations and cyber warfare tactics, techniques, and procedures focused on the threat to networked weapons platforms and various information networks.
• Analyzes network events to determine the impact on current operations and conducts all-source research to determine adversary capability and intent.
• Able to conduct telecommunication and/or protocol analysis in order to analyze switching and signaling telecommunication protocols between different nodes in PSTN or mobile telephone networks, such as 2G or 3G GSM networks, CDMA networks, WiMAX, and so on.
• Prepares assessments and cyber threat profiles of current events based on the sophisticated collection, research, and analysis of classified and open source information.
• Collects data using a combination of standard/non-standard intelligence methods and business processes. Produces high-quality papers, presentations, recommendations, and findings for senior US government intelligence and network operations officials.
• Applies extensive technical expertise and has full knowledge of other related disciplines. Must be able to communicate effectively and clearly present technical approaches and findings.

**Problem Solving:**
• Develops technical solutions to complex problems which require the regular use of ingenuity and creativity. Must possess solid coordination abilities with existing Intelligence Community resources to develop fully coordinated intelligence summaries.
• Work is performed with minimal direction. Exercises considerable latitude in determining technical objectives of assignment. Completed work is reviewed from a relatively long-term perspective, for desired results.

**This requisition may be filled at a higher grade based on the qualifications listed below. It may be filled at either a Level 3 or a Level 4.**

**Typical Minimum Education / Experience:**
• **Level 3** = 5 years of experience with a Bachelor of Science; 3 years with a Master's; 0 years with a PhD. Degree should be in systems engineering, computer science, software engineering, network engineering, information systems security, or a similar field. 4 additional years of job experience may be substituted in lieu of a college degree.
• **Level 4** = 9 years of experience with a Bachelor's degree; 7 years with a Master's degree; 4 years with a PhD. Degree should be in systems engineering, computer science, software engineering, network engineering, information systems security, or a similar field. 4 additional years of job experience may be substituted in lieu of a college degree.

**Basic Qualifications for both Level 3 & 4:**
• Active/current Top Secret clearance with the ability to obtain SCI access.
• Must be willing to submit to a Counter-Intelligence Polygraph as needed.

**Preferred Qualifications for both Level 3 & 4:**
• Experience with DoD Offensive Cyber Operations, virtual environments, and cyber systems design is preferred.
• DoDM 8570.01 certifications at IAT/IAM/IASAE Level 2 or 3 are preferred (examples: Security+, Network+, CEH, CCNA, MCSA, MCSE, CISSP, etc.).
• Active Full-Scope Polygraph.

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.

**Title:** _Cyber Intelligence Analyst 3 / 4_
**Location:** _Florida-Tampa_
**Requisition ID:** _17015129_
          (USA-FL-Tampa) Federal - Full Stack Developer   
Title: Federal - Full Stack Developer Location: USA-Southeast Job Number: 00489871 Organization: Accenture Federal / Advanced Technology & Architecture Location: Arlington, VA Great outcomes are everything. It’s what drives us to turn bold ideas into breakthrough solutions that solve the toughest problems fast--the first time. So you can change how people work and live. Advanced Technology & Architecture professionals are responsible for delivering technology innovation and providing the backbone of our systems integration business. As an AT&A professional, you can have a primary skill focus on translating a business need into a robust and integrated technology solution. Technology Architecture professionals are skilled in highly prescriptive delivery approaches and methods, and the supporting development and run-time environments required to design, build and deploy custom application solutions. Specifically for this role, you will leverage your skills to implement these solutions on Java EE platforms. Job Description: These positions are within our Advance Technology & Architecture group. These professionals work with application and technical architects to translate a business need into a robust and integrated technology solution. They are skilled in delivery approaches and methods, and the supporting development and run-time environments required to design, build and deploy custom application solutions for Accenture Federal Services clients. Specifically for this role, you will leverage your skills to implement these solutions on Java EE platforms. Key responsibilities of the Full Stack Developer may include: Ownership of technical designs, code development, and component test execution to demonstrate adherence to the functional specification. Using configuration management and integration/build automation tools to manage and deploy Java code. Applying knowledge of common, relevant architecture frameworks in defining and evaluating application architectures. 
Performing code reviews and providing critical suggestions for fixes and improvements. Supporting issue analysis and fix activities during test phases, as well as production issue resolution. Troubleshooting and performance tuning Java-based applications. Developing and demonstrating a broad set of technology skills in Java technologies, Service Oriented Architecture concepts, Open Source libraries and frameworks, and technology architecture concepts. Collaborating within a project team comprised of talented employees with diverse and complementary skills. Leading a team of junior developers, including planning work activities and reviewing work products. Qualifications: Basic Qualifications: Minimum 4 years of experience developing web applications using Java or dynamic languages (e.g. Grails, Ruby on Rails) Minimum 4 years of experience with Java-based MVC frameworks (Spring MVC, Struts, JSF, Tapestry, etc.) Minimum 2 years of experience with Javascript frameworks (e.g. Angular (2+), AngularJS (1.4+), KnockoutJS, EmberJS, MeteorJS, ReactJS, Vue.js, etc.) Minimum 3 years of experience using HTML5/CSS3 in responsive web applications Preferred Skills: Experience with Java & JEE frameworks Experience with Frontend frameworks and libraries Experience with Eclipse/IntelliJ/NetBeans/FireBug/Chrome development tools Experience with databases, SQL, data modeling Experience with performance tuning Experience with mobile (IOS, Android) Experience with responsive web development (e.g. bootstrap, material design, zurb foundation, etc.) Experience with dynamic stylesheet languages (e.g. SASS, LESS) Experience with Agile (SCRUM, Kanban), Standard SDLC, CMMI compliant projects Experience with issue tracking software (e.g. JIRA, BugZilla, Rational Suite) Experience with source control management (SVN, GIT, Mercurial, CVS) Experience with build/continuous integration (Maven, Ant, Grunt, Gulp, Node/NPM, Webpack, etc.)
Previous consulting/contracting experience Bachelor's Degree in Computer Science, Engineering or Technical Science Demonstrated leadership in professional setting; either military or civilian An active security clearance or the ability to obtain one may be required for this role. Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Applicants for employment in the U.S. must possess work authorization which does not require now or in the future sponsorship by the employer for a visa. Accenture is a federal contractor, an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities. Equal Employment Opportunity All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Accenture is committed to providing veteran employment opportunities to our service men and women. Job: Software Engineering
          Open source express: Cloud Foundry, C-Radar, Smile, Alfresco   
En bref: Microsoft et Orange rejoignent la fondation Cloud Foundry, l'ex-Data Publica acquis par Sidetrade, Smile reprend l'agence Hypertexte, nomination chez Alfresco.
          Dave Gnukem 0.71   
Dave Gnukem is a cross-platform, open source 2D scrolling platform shooter inspired by Duke Nukem 1.... [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 20.7 MB ]
          Hugin 2017.0.0   
Hugin is an easy to use cross-platform panoramic imaging toolchain based on Panorama Tools. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 34.2 MB ]
          Media Player Classic Black Edition (MPC-BE) Beta 1.5.1 Build 2670   
Media Player Classic - Black Edition (MPC-BE) is a free and open source audio and video player for Windows based on the original Media Player Classic project. [License: Open Source | Requires: Win 10 / 8 / 7 / Vista / XP | Size: 18.6 MB ]
          Getting started with C# on Linux   
I’ve heard lots of good things about C#, but I’ve never tried it. I saw some code and thought, “fine, it’s like Java.” As I also run Linux on my primary machine, I was not interested in Microsoft technologies. However, a few years ago .NET Core was open sourced and it reached version 1.0 (now […]
          Lucid Planet Radio with Dr. Kelly: Open Source Reality: The Emergence of a Meta-Myth, with Mitch Schultz   
As complexity continues to grow, the evolution of consciousness imparts new relationships to our understanding of reality, revealing the emergence of a new human story. Let’s explore an open source approach to humanity’s collective knowledge, and remix our narratives to create deeply layered allegories that re-contextualize reality. How can we imagine an evolving meta-myth that influences systemic change through transmedia storytelling? Join renowned producer, writer and dire ...
          Scribo (a multiblog client) is published   

The PetrSU-Nokia-NSN laboratory on wireless and mobile technology has published the alpha release of Scribo. The application is a client for various blog services (e.g., Livejournal and Blurty) that runs on the user's mobile device. The current version supports the Nokia N900 Internet tablet on the open mobile platform Maemo/MeeGo (Maemo 5).

Scribo is an open source application (Python and Qt) under GPLv2. The code is available on the Gitorious hosting infrastructure. Direct installation to a mobile device is possible via the Maemo developers community repository. More details about the Scribo project can be found on the developers' wiki (in Russian).

We would be happy if you start using Scribo and give us your feedback. Please send us your comments and report bugs to our bugzilla.

          Firefox 54.0.1 64-bit   
Mozilla Firefox is a fast, light and tidy open source web browser. At its public launch in 2004 Mozilla Firefox was the first browser to challenge Microsoft Internet Explorer’s dominance. Since then, Mozilla Firefox has consistently featured in the t...
          Firefox 55.0 Beta 5   
Mozilla Firefox is a fast, light and tidy open source web browser. At its public launch in 2004, Mozilla Firefox was the first browser to challenge the dominance of Microsoft Internet Explorer. Since then, Mozilla Firefox has consistently remained among the top 3 b...
          Microsoft will never change   

I may be wrong about the title, but what Microsoft has demonstrated recently confirms it: Microsoft seems never to learn, or to have not the slightest intention of improving. It has just been discovered that they violate the GPL license. Just when it seemed that Microsoft was changing and taking a step toward free software by releasing code for the Linux kernel under the GPL, it turns out they were violating these open source licenses.

Let me pause here, because this is not just something that annoys Linux users because Microsoft does not support free software, because proprietary software is bad, and so on. This is a license violation, committed by an enormous corporation, and one that will surely go unpunished.

The violation comes from drivers that combined GPL code with a proprietary binary, something the GPL prohibits. Microsoft later released the proprietary code in question, but announced it as if it were something good, when in reality it was just patching the violation it had been committing.

Just one more thing to say: typical Microsoft.

More information: Microsoft releases code for Linux, at MuyComputer
Microsoft violated the GPL, at MuyLinux

Related articles:
What would a Microsoft Linux distro look like?
Microsoft, stay away from my Firefox!
If everything were made by Microsoft

          What is Arduino ?   

Arduino is an open source electronics platform. That means you don't need to pay for anything beyond the Arduino board itself, and you can find a lot of articles and resources about it. All circuit schematics and PCB files are available on the Internet, so if you want, you can even build the board at home; you should know that it's possible. In addition, you don't need an external programmer circuit for Arduino, because the programmer is built into the board (PIC users will understand what I mean :) ), so you can upload your embedded program to the Arduino over its USB port. A single click is enough to do everything. That is all, and it is pretty easy.

There are many kinds of Arduino boards, so you can pick whichever Arduino suits your purpose and projects. I'm going to use the Arduino Mega 2560 for my projects. It has an ATmega2560 microcontroller and plenty of ports, including analog I/O, digital I/O, and serial communication ports, so we can work with both analog and digital signals. Moreover, there are three serial (RX-TX) communication ports, and I2C communication is also possible with this board. The most important point is its libraries. I think this is the biggest criterion for any programming language: if a language has a lot of libraries, you can move faster on your projects. You can find libraries for lots of different electronic components, so you don't need to fight with tons of complex code; just follow these steps: 1) buy an electronic component, for example a fingerprint reader; 2) find its library; 3) add it to your sketch; and 4) use it in your project easily. The Arduino programming language is also really easy to understand; it is user friendly :) I can guarantee that, and I have to say one more good thing about Arduino: it has a lot of shields, which are add-on circuit boards, such as a sensor shield, a motor driver shield, or a network shield that connects your Arduino to the Internet or any other network. When you build a project that needs a shield, you can simply attach it on top of the Arduino board.
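To give a concrete taste of how short a sketch can be, here is the classic "blink" program. The stub functions and the pin number are my own additions so the sketch logic can also be compiled and run on a desktop compiler; on a real board you would delete the stubs and let the Arduino environment supply them, keeping only setup() and loop().

```cpp
#include <cstdio>

// --- Desktop stand-ins for the Arduino core (my own stubs, for illustration).
// --- On real hardware, Arduino.h provides pinMode/digitalWrite/delay.
#define OUTPUT 1
#define HIGH   1
#define LOW    0
const int LED_PIN = 13;              // the on-board LED pin on most boards

int g_writes = 0;                    // desktop-only bookkeeping: counts pin writes

void pinMode(int pin, int mode)   { std::printf("pinMode(%d, %d)\n", pin, mode); }
void digitalWrite(int pin, int v) { ++g_writes; std::printf("digitalWrite(%d, %d)\n", pin, v); }
void delay(unsigned long)         { /* busy-waits on hardware; no-op here */ }

// --- The sketch itself: every Arduino program defines setup() and loop().
// --- The IDE's one-click upload compiles these and flashes them over USB.
void setup() {
    pinMode(LED_PIN, OUTPUT);        // configure the LED pin once at startup
}

void loop() {
    digitalWrite(LED_PIN, HIGH);     // LED on
    delay(1000);                     // wait one second
    digitalWrite(LED_PIN, LOW);      // LED off
    delay(1000);
}
```

On the board, the Arduino runtime calls setup() once and then loop() forever; that two-function contract is the whole programming model.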

If you want to learn more information about Arduino you can visit the web site.

If you want to look at all my Arduino projects, you can visit my blog. They are not in English yet; I'm going to translate them too, like this page, so you should check back if you want to read my articles about Arduino. I recommend my blog's Facebook page and Twitter account to follow the news easily, or you can add me to your circles on Google Plus.

Technology Laboratory on Twitter 
Technology Laboratory on Facebook

These are some general features of Arduino

Arduino Mega Features:
Operating Voltage: 5V
Input Voltage (recommended): 9V
Input Voltage (limits): 7-18V
Digital I/O Pins: 54 (of which 14 provide PWM output)
Analog Input Pins: 16
DC Current per I/O Pin: 40 mA
DC Current for 3.3V Pin: 50 mA
Flash Memory: 256 KB, of which 8 KB used by bootloader
Clock Speed: 16 MHz

and this is the picture of my Arduino :)

This version of Arduino is not the small one, but it is good for big projects :) As I mentioned at the beginning of this post, if we need a smaller or faster board, those are also available to Arduino users. You need a 9V power source to run it standalone, or you can simply connect it to your computer via the USB port.


Did you spot something wrong with my English? Please let me know, and read this page.

          OpenCV (Open Source Computer Vision)   

OpenCV is an open source library that runs on many platforms, with versions available for C, C++, and C#. It is also supported by Intel. With OpenCV, computer vision and image processing tasks can be carried out in a practical way. All you need is a compiler: after creating a C++ project and completing the necessary library setup, OpenCV can be used easily. It ships with many modules, and its functions make image processing very easy. If you have a general knowledge of image processing, using the OpenCV functions does not look hard at all :) If you wish, you can also examine what happens behind the scenes inside those functions.

What can be done with OpenCV? A little digging around the Internet turned up:

- Object detection and recognition
- Face detection and recognition
- Color recognition and tracking
- Working with webcam, image, and video streams
- Applying filters to images, etc.

Note: You can check the OpenCV website via the link. (In future posts I will share, with screenshots, the installation steps required for OpenCV with C++ and C# on Visual Studio 2010.)

The list of things that can be done with OpenCV goes on and on like this. The downloaded package comes with many examples, and studying them together with the work found online makes learning fast. From what I have observed, once you understand the structure of OpenCV and become familiar with its functions, you reach a level where you can easily write your own applications. There is one very important thing I must add here: I wish I had been more attentive during my image processing course. As far as I can see while using OpenCV, I missed a lot by sitting there idly. Even though the course content was not really sufficient, I should have been more dedicated :P Right now most of the topic headings are familiar, but the content feels new to me.

Although it has been only about three weeks since I met OpenCV (honestly, adding the libraries and compiling the first project was what cost me the most time :D), I have come quite a long way. I will prepare and share posts about the applications I have built within a short time. My goal is to deepen my OpenCV knowledge a bit more and to prepare a good introductory-level resource.
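The image-filtering entry in the list above is a good example of what these library functions do under the hood: most of them are neighborhood operations over a pixel grid. The snippet below is not OpenCV code; it is a plain standard-C++ box blur (the operation behind OpenCV's cv::blur), with a function name of my own choosing, just to illustrate the idea without any library dependency.

```cpp
#include <cstddef>
#include <vector>

// A 3x3 box blur over a grayscale image: each inner pixel becomes the
// average of its 3x3 neighborhood. This is the operation OpenCV's
// cv::blur performs (OpenCV additionally handles borders, color
// channels, and performance). Edge pixels are left unchanged here
// for brevity.
std::vector<std::vector<int>> boxBlur3x3(const std::vector<std::vector<int>>& img) {
    std::vector<std::vector<int>> out = img;
    for (std::size_t y = 1; y + 1 < img.size(); ++y) {
        for (std::size_t x = 1; x + 1 < img[y].size(); ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)       // sum the 3x3 neighborhood
                for (int dx = -1; dx <= 1; ++dx)
                    sum += img[y + dy][x + dx];
            out[y][x] = sum / 9;                   // average of the nine samples
        }
    }
    return out;
}
```

A single bright pixel on a black background, for instance, gets spread evenly over its nine neighbors; that smearing of intensity is exactly what a blur filter does.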

          Processing Programming Language