Retail Sales Associate, Colorado Springs, CO   
CO-Colorado Springs, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          AT&T and China Telecom sign partnership deal   
AT&T and China Telecom announced they have signed a framework agreement that strengthens their cooperation to support the development of advanced network services for multinational companies operating in China.

Through the agreement, the companies will help multinational customers leverage secure global communications to support business growth in China and worldwide. AT&T and China Telecom will also jointly work to create new services in the areas of Internet of Things (IoT), cloud-based big data, Voice over LTE (VoLTE) roaming, and software-defined networks (SDN).

The companies noted that the new agreement renews the 20-year authorisation under which Shanghai Symphony Telecommunications (SST), the joint venture formed by AT&T, China Telecom and Shanghai Information Investments, was established in 2000. As part of the new agreement, the parties intend to expand the scope of SST and the locations it serves to enable the delivery of new business services and technologies to customers.

Specifically, under the renewed agreement the companies plan to:

1.         Help establish industry standards for SDN and support their adoption.

2.         Launch bilateral roaming tests, as contemplated in a previously executed roaming agreement for business customers.

3.         Explore the potential of VoLTE roaming.



          Nokia – IP networks re-imagined   
Recently we have seen Cisco predict that busy hour global IP traffic will grow 4.6-fold (35% CAGR) from 2016 to 2021, reaching 4.3 Pb/s by 2021, compared to average Internet traffic that will grow 3.2-fold (26% CAGR) over the same period to reach 717 Tb/s by 2021. The latest edition of the Ericsson Mobility Report, released earlier this week, calculates that total traffic in mobile networks increased by 70% between the end of Q1 2016 and the end of Q1 2017. And now, Nokia Bell Labs has just announced its own prediction: IP traffic will more than double in the next five years, reaching 330 exabytes per month by 2022 while growing at a 25% CAGR. The company anticipates that peak data rates will grow even faster, at nearly 40% annually. Nokia Bell Labs also predicts that 3D/4K/UHD traffic will experience 4.79x growth from 2017 to 2022, that wireless traffic will experience 7.5x growth over the same period, and that worldwide IoT devices will grow from 12bn in 2017 to 100bn in 2025.
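As a quick sanity check on how those multiples and growth rates line up, recall that an N-year multiple M and a compound annual growth rate r are related by M = (1 + r)^N. A minimal Python check (ours, not from any of the cited reports):

# CAGR sanity check: total growth multiple implied by a constant annual rate.
def multiple_from_cagr(rate, years):
    return (1 + rate) ** years

print(round(multiple_from_cagr(0.35, 5), 2))  # Cisco busy hour: 4.48, ~ the quoted 4.6-fold
print(round(multiple_from_cagr(0.26, 5), 2))  # Cisco average traffic: 3.18, ~ the quoted 3.2-fold
print(round(multiple_from_cagr(0.25, 5), 2))  # Nokia Bell Labs: 3.05, i.e. 'more than double'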

Nokia unveils next-gen network processing engine

Nokia's processing engine sets the stage for perhaps the most significant announcement from the company since the merger of Alcatel-Lucent and Nokia Siemens Networks in 2015. In a press event entitled 'IP networks reimagined', Nokia unveiled its FP4 silicon, featuring the 'first' 2.4 Tbit/s network processor, up to six times more powerful than processors currently available. The proprietary chipset is designed to power a new generation of petabit-class routers.

Core routers traditionally have been the 'big iron' that powers the heart of the Internet. It is a product category dominated by Cisco, Huawei, Juniper and Nokia, the latter via its existing 7950 XRS routing platform. However, the market has been in flux. Earlier this month, Dell’Oro Group reported a significant break in Q1 2017, with Huawei taking the top spot from Cisco in the core router market for the first time. The report also found Huawei taking over second spot from Nokia in the SP edge router and CES market. The primary reason cited for this shift is that the SP core routing business is growing at only a low single-digit rate, while China Mobile is defying the trend with significant investments in its IP core backbone, for which Huawei is the lead supplier. Nevertheless, the overall predictions of rapid growth in IP traffic over the coming five years make it more likely that service providers will need a significant refresh of their core backbones to handle hundreds of 100 or 400 Gbit/s connections at major nodes.

Nokia's previous generation FP3 chipset, unveiled by Alcatel-Lucent in June 2011 and launched in 2012, packed 288 RISC cores operating at 1 GHz and leveraged 40 nm process technology; the FP2 chipset offered 112 cores at 840 MHz and was built in 90 nm. This network processor lineage can be traced back to TiMetra Networks, a start-up based in Mountain View, California that launched its first carrier-class routing platforms in 2003.

TiMetra, which was headed by Basil Alwan, was acquired by Alcatel-Lucent later in 2003 for approximately $150 million in stock. The product line went on to become the highly successful 7450, 7750 and eventually 7950 carrier platforms - the basis for the IP division at Alcatel-Lucent. Not bad for a small start-up's idea that grew into the star platform underpinning the combined Alcatel-Lucent and Nokia Siemens Networks.

In a launch day webcast, Basil Alwan, now president of Nokia's IP/Optical Networks business group, said we are moving into a new phase of the Internet requiring 'cloud-scale routing'. First, he noted that there is market confusion between Internet-class routers and core data centre switches, which are being used to power the hyperscale infrastructure of the Internet content providers. High-end data centre spine switches are capable of routing packets at high rates and can handle access control lists (ACLs). Likewise, conventional big iron core routers can switch data flows, and are sometimes deployed in data centres. However, there are tradeoffs when this role reversal happens. Nokia's new FP4 chipset aims to fix that.

First multi-terabit NPU silicon

Six years have passed since the FP3, or roughly two cycles in the evolution of Moore's Law, so naturally one would expect the new silicon to be smaller, faster, more powerful and more efficient. But Alwan said the company took its time to rethink how packet processing works at the silicon level. To begin with, Nokia redesigned the onboard memory, employing 2.5D and 3D layouts on 16 nm Fin Field Effect Transistor (FinFET) technology. The single chip contains 22 dies, including memory stacks and control logic. It runs at 2.4 Tbit/s half-duplex, six times the capacity of the current-generation 400 Gbit/s FP3 chipset. The FP4 will support full terabit IP flows. All conventional routing capabilities are included. Deep classification capabilities include enhanced packet intelligence and control, policy controls, telemetry and security.

The FP4 could be used to provide an in-service upgrade to Nokia's current line of core routers and carrier switches. It will also be used to power a new family of 7750 SR-s series routers designed for single-node, cloud-scale density. In terms of specs, the SR-s boasts a 144 Tbit/s configuration supporting port densities of up to 144 future terabit links, 288 x 400 Gbit/s ports, or 1,440 x 100 Gigabit Ethernet ports. Absolute capacity could be doubled for a maximum 288 Tbit/s configuration. It runs the same software as the company's widely deployed systems. The first 7750 SR-s boxes are already running in Nokia labs and the first commercial shipments are expected in Q4.

Nokia is also introducing a chassis extension option to push its router into petabit territory. Without using the switching shelf concept employed in the multi-chassis designs of its competitors, Nokia is offering the means to integrate up to six of its 7750 SR-s routers into a single system. This results in 576 Tbit/s of capacity, enough for densities of up to 2,880 x 100 GbE ports or 720 x 400 Gbit/s ports. Adding up the numbers, it is not truly petabit-class, but at 576 Tbit/s it is more than halfway there.

Network telemetry leads to security
Another interesting twist concerns security and petabit-class routing. In December 2016, Nokia agreed to acquire Deepfield, a start-up specialising in real-time analytics for IP network performance management and security. Deepfield, founded in 2011 and based in Ann Arbor, Michigan, has developed an analytics platform that identifies over 30,000 popular cloud applications and services. Its Internet Genome tracks how traffic runs to and through networks to reach subscribers, in real time, and without the need for probes, taps and monitors in the network itself. At the time of the deal, Nokia said it would integrate Deepfield big data analytics with the dynamic control capabilities of open SDN platforms, such as the Nokia Network Services Platform (NSP) and Nuage Networks Virtualized Services Platform (VSP).

Expanding on this idea, Alwan said Deepfield can really leverage the routers rather than probes to understand what is happening to the traffic. Fewer probes mean lower investment. More importantly, Deepfield could be used to track DDoS attacks passing through the core of the network rather than at the edge destination target. The new FP4 silicon is said to be a very good match for this application.


          Cavium unveils FastLinQ 41000 10/25/40/50 GBE NIC   
Cavium announced the introduction of its FastLinQ 41000 Series, a low-power, second-generation family of 10/25/40/50 Gigabit Ethernet NICs claimed to be the only such adapters to feature Universal RDMA.

Cavium's FastLinQ 41000 Series devices are designed to deliver advanced networking for cloud and telco architectures; the products are available immediately from Cavium and shortly due to be available from Tier-1 OEMs/ODMs in standard, mezzanine, LOM and OCP form factors.

The FastLinQ QL41000 family of standards-compliant 25/50 Gigabit Ethernet NICs offers support for concurrent RoCE, RoCEv2 and iWARP, i.e. Universal RDMA. The FastLinQ adapters, coupled with server and networking platforms, are designed to enable enterprise data centres to optimise infrastructure costs and increase virtual machine density, leveraging technologies such as concurrent SR-IOV and NIC Partitioning (NPAR) that provide acceleration and QoS for tenant workloads and infrastructure traffic.

The new FastLinQ adapters also support network function virtualisation with enhanced small packet performance via integration into DPDK and OpenStack, enabling cloud and telcos/NFV customers to deploy, manage and accelerate demanding artificial intelligence, big data, CDN and machine learning workloads.

For telco and NFV applications, the products provide improved small packet performance with line-rate packets per second for 10/25 Gigabit Ethernet, MPLSoUDP offload and integration with DPDK and OpenStack using the Mirantis FUEL plug-in. This allows telcos and NFV application vendors to deploy, manage and accelerate demanding NFV workloads.

Additionally, integrated storage acceleration and offloads such as NVMe-oF, NVMe-Direct, iSCSI, iSER and FCoE enable upgrades from existing storage paradigms to next generation NVMe and persistent memory semantics.

The products also offer zero-touch automatic speed and FEC selection via Cavium's FastLinQ SmartAN technology, which is designed to significantly reduce interoperability challenges in physical layer networks.

Further features of the FastLinQ 41000 Series include:

1.         10/25/40/50 Gigabit Ethernet connectivity across standard and OCP form factors.

2.         Stateless offloads for VxLAN, NVGRE and GENEVE.

3.         SmartAN to provide seamless 10/25 Gigabit Ethernet interoperability.

4.         Storage protocol offloads for iSCSI, FCoE, iSER, NVMe-oF and SMB Direct.

5.         Management across heterogeneous platforms with QConvergeConsole GUI and CLI.


Regarding the new products, Martin Hull, senior director product management at Arista Networks, said, "Arista… has partnered with Cavium to ensure availability of tested and interoperable solutions for hyperscale data centres… Cavium's FastLinQ 41000 Series of NICs and Arista’s portfolio of 25 Gbit/s leaf and spine systems deliver backward compatibility and investment protection with standards compliance".



          Exclusive Interview with Steve Douty President/CEO of Nexo, Inc.   

Exclusive interview with Steve Douty, President/CEO and member of the board of directors of Nexo, Inc. Nexo, Inc. (a private company) disrupts the movement of big data: a new era of information sharing has begun. Last Friday I sat down with Steve Douty to discuss the future of big file sharing, and the expense and security surrounding big […]



          Nexo Inc. (a private company) Disrupts the Movement of Big Data   

Nexo, Inc. (a private company) disrupts the movement of big data: a new era of information sharing has begun. $AMZN, $GOOGL, $IBM, $MSFT, $ORCL. This discussion is about 1.5x longer than my regular columns on innovative, disruptive, beneficial technology, and fairly technical to boot, but I believe you will all find it compelling and […]



          Software Engineer II - Microsoft - Redmond, WA   
Providing lightning-fast time-to-insight over big data is today's hottest trend. Continuously dropping storage prices enable new scenarios every day where data
From Microsoft - Sat, 03 Jun 2017 07:23:35 GMT - View all Redmond, WA jobs
          Senior Software Engineer - Microsoft - Redmond, WA   
Providing lightning-fast time-to-insight over big data is today's hottest trend. Continuously dropping storage prices enable new scenarios every day where data
From Microsoft - Thu, 25 May 2017 01:20:46 GMT - View all Redmond, WA jobs
          Senior Sales Engineer   
CA-Santa Clara, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          (IT) Data Architect   

Location: New York, NY   

Software Guidance & Assistance, Inc. (SGA) is searching for a Data Architect for a right-to-hire assignment with one of our premier financial clients in New York, NY.

Responsibilities:
We are looking for an experienced Technologist/Data Architect responsible for ensuring that the data assets of an organization are supported by a data architecture that aids the organization in achieving its strategic goals. The data architecture should cover technical architectures and subject-domain-based design for operational/transactional data as well as analytic data, in both relational and NoSQL/big data technology environments. Besides data modeling tasks, the data architect is also expected to contribute to data processing tasks such as data mapping, data conditioning, ETL/ELT and data delivery through Java and Python, as well as the metadata management system.
- Analyze functional and non-functional business requirements, translate them into technical data requirements, and create or update existing logical and physical data models.
- Create data models (logical) and DB designs (physical) that depict the business areas and systems.
- Design and develop data processing techniques in Cassandra and Hadoop environments.
- Design and implement solutions for data reconciliation, data delivery and data forensics.
- Implement data technology solutions in the areas of master data, metadata and reference data.
- Design and implement data analytics solutions using tools such as dashboarding and reporting.

Required Skills:
- BS in Computer Science or a related field; higher degree preferred.
- 5+ years' working experience in data modeling methodology and process using Erwin.
- Data collection, curation, preparation and transformation.
- Data warehousing, reporting and business intelligence, such as dashboarding.
- Clear verbal and written communication; demonstrated ability to collaborate with peers from a variety of disciplines.
- Working knowledge of RDBMS (DB2) as well as Cassandra and Hadoop.
- Implementation skills in Java, Python and scripting.

Preferred Skills:
- Breadth and depth of skill to build and deliver scalable data and reporting solutions.
- Deep knowledge and experience with Cassandra, related software tools and performance optimization.
- An obsession with delivering an exceptional customer experience and site resiliency.
- Desire, initiative and resourcefulness to learn new tools, frameworks and languages.

SGA is a Certified Women's Business Enterprise (WBE) celebrating over thirty years of service to our national client base for both permanent placement and consulting opportunities. For consulting positions, we offer a variety of benefit options including but not limited to health & dental insurance, paid vacation, and timely payment via direct deposit. Only those authorized to work for government entities will be considered for government roles. SGA is an EEO employer. We encourage Veterans to apply.
 
Type: Contract
Location: New York, NY
Country: United States of America
Contact: george wellington
Advertiser: Software Guidance & Assistance
Reference: NT17-02437

          Micron Technology beats Q3 earnings estimates   

Shares of Micron Technology Inc. rose slightly after hours Thursday after the chip maker reported earnings for its third quarter. Micron reported net income of $1.6 billion, or $1.40 per share, after a loss of $215 million, or 21 cents per share, during the same period a year ago. Adjusted earnings per share came in at $1.62, above the FactSet consensus of $1.52. Revenue for the quarter hit $5.6 billion, up from $2.9 billion a year earlier and just above FactSet's $5.4 billion consensus. "The global trends taking shape today, including machine learning and big data analytics, are exciting and create significant opportunities for Micron," said Chief Executive Sanjay Mehrotra in a statement. "We are focused on positioning the company to realize these opportunities by investing in technology and products while also strengthening our balance sheet." Shares of Micron Technology have gained nearly 44% in the year to date, while the S&P 500 index is up 8% over the same period.



          Big Data Machine Learning: Patterns for Predictive Analytics   

We will start by discussing two phases. First is the training phase, where you learn a model from training data. Next is the predicting phase, where you use the model to predict unknown or future outcomes.
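A minimal sketch of those two phases in Python using scikit-learn (our illustration; the Refcard itself is not tied to any particular library):

# Training phase: learn a model from labelled training data.
from sklearn.linear_model import LogisticRegression

X_train = [[0.0], [1.0], [2.0], [3.0]]  # feature vectors
y_train = [0, 0, 1, 1]                  # known outcomes
model = LogisticRegression().fit(X_train, y_train)

# Predicting phase: use the model to predict an unknown or future outcome.
print(model.predict([[1.5]]))           # predicted class for unseen input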


          Apache HBase: The NoSQL Database for Hadoop and Big Data   

Use HBase when you need random, real-time read/write access to your Big Data. The goal of the HBase project is to host very large tables — billions of rows multiplied by millions of columns — on clusters built with commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google’s Bigtable. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
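To make the random read/write access model concrete, here is a hedged sketch using the third-party happybase Python client; it assumes an HBase Thrift server running on localhost and an existing table 'users' with a column family 'cf':

import happybase

connection = happybase.Connection('localhost')  # connect via the Thrift gateway
table = connection.table('users')

# Random, real-time write: cells are addressed by (row key, column).
table.put(b'user#42', {b'cf:name': b'Ada', b'cf:city': b'London'})

# Random, real-time read of a single row by its key.
print(table.row(b'user#42'))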


          Today my Microsoft MVP award ends: Goodbye MVPs. Hello Technical Communities. It's been great!   
Yesterday was my last day holding the Most Valuable Professional award that Microsoft gives to technical professionals who share their work in the communities. People who invest their time in maintaining a forum or a blog, who take part in Facebook discussion groups, or who meet up over a few beers to chat about bots, big data, computer security, hacking or programming in any language.

Figure 1: Goodbye MVPs. Hello Technical Communities.

In my case it has been a good pile of years. Specifically, from 1 July 2004 to 30 June 2017. But that is what happens when you get older: everything is measured in a good number of years. Years that fly by while you enjoy the technology, the things you do, and the things you experience. Being awarded as an MVP by Microsoft never placed any restriction on what I could do or say about the company's technologies; in fact I have been very critical, or a big fan, depending on the topic. In this talk I told some of the anecdotes from more than a decade as an MVP.


Figure 2: A decade as a Security MVP is an awful lot

Now my time as an MVP is over, because life no longer leaves me room to keep up the pace of events, articles and community participation that I maintained in past years, and one has to recognise and make way for the young MVPs who are receiving the award today. I still remember my first day, which this video captures perfectly. Being an MVP makes you feel great...


Figure 3: I'm a Microsoft MVP and I feel great!

For me it was a wonderful experience, surrounded by marvellous people from whom I learned and with whom I fell in love. People with passion for what they do, for technology, for making everything a little better, for helping anyone who wants to learn, for sharing knowledge and experiences. People I laughed with and, above all, lived with. Luis Franco, an MVP who was always there with me.


These days, the number of technology communities we have is enormous. The developer, hacking and big data communities, to name a few, are in good health. Every week there are plenty of meet-ups, article publications, studies, conferences, and so on. A critical mass of technology lovers has formed that helps technology advance ever faster. But years ago it was not like that. We should thank the communication tools we now have, which help us build these interest groups around particular topics and carry us a little further along the road of knowledge.

Figure 5: The MVPs of the 2009 Torrid Calendar. I am one of them (the one WITHOUT a hat); another won the Christmas lottery; one is at Microsoft now; and others... went to the gym a lot, or hardly at all, but they are all great.

With the MVPs, many years ago, we did what we could. We shared, and we signed up for everything. Even stripping off for a Torrid Calendar to raise funds for social causes. And always having fun. Here, giving a talk on injection techniques in web applications dressed up as the Fantastic Four. The villain, naturally, was Doctor Doom (the doctor who gives the injections, who was bad, really bad).

Figure 6: With another MVP, Ricardo Varela, giving a talk on web hacking.
Check out the Batman belt. The cross-over is super cool!

For me, belonging to the MVP community was wonderful. I collaborated with many, learned from others, socialised, gave talks, watched videos, even lived with some of them, and always, always, always enjoyed a good chat about technology or a joint project. Even for the launch of AURA I worked with two seasoned MVPs.


Today I say goodbye to the MVP programme, but by no means to the technical communities, where I will keep contributing my grain of sand however I can. With my blog, with the odd talk, with a book here and there, with an article in some magazine, or with a get-together at a bowling alley with young students from the Universidad Autónoma de Madrid who end up left out of the photo for not wearing a hat ;).

Saludos Malignos!

PS: I want to thank Alberto Amescua and Cristina González, who looked after me all these years in the MVP group, and each and every one of you who have been, are or will be MVPs in any corner of the world. You have all my respect, affection and admiration.

          Barcelona: Changing the Game with Big Data. Registration open   
On 13 July the Changing the Game with Big Data event comes to Barcelona, organised by LUCA D3, the Telefónica unit that helps companies on their journey to becoming data-driven businesses.

Figure 1: Changing the Game with Big Data in Barcelona

Registration is now open. The event will take place in the auditorium of Torre Telefónica at Diagonal 00, located at Plaça d'Ernest Lluch i Martin, 5, 08019 Barcelona. This was the venue where we presented our beloved Aura for the first time, and it is the venue chosen for LUCA D3's first event in Barcelona, where you will be able to see the products and services Telefónica builds for its customers.

Figure 2: Speakers at Changing the Game with Big Data in Barcelona

At this event, besides concrete use cases of big data for businesses such as transport, retail, tourism, advertising and smart cities, you will be able to hear directly from customers who use them, recounting their experiences first-hand.

Figure 3: Agenda of Changing the Game with Big Data in Barcelona

If you want to attend, you can register on the website to reserve your place. See you in Barcelona on 13 July.

Update: Contact hello@luca-d3.com to get your invitation code.

The LUCA D3 team

          M2Banking & Fintech Latam   

M2Banking & Fintech Latam

July 18, 2017 to July 20, 2017

In its 8th year, and 2nd year in San Francisco, the event brings together the key players of Latin America’s fintech ecosystem for 3 intensive days focusing on mobile money, payments and financial innovation in the region. The forum aims to connect Latin American decision makers with innovators from the Bay Area, US and international mobile community.

The conference will follow a case study approach, with the region’s innovators exploring the complex and fascinating dynamics of financial institutions as they navigate towards complete integration with the mobile world.

Topics of Discussion

  • Fintech ecosystem and mobile financial services in Latin America
  • Digital wallets and innovative services for the unbanked, microfinance and microcredit
  • Mobile money, payments and remittances in Latin America
  • Venture capital and investing
  • Regulatory intelligence, benchmarks
  • Bitcoin & blockchain
  • Big data, location-based services and chatbots
  • and much more!

Use the promo code MMA20 to get 20% discount on all ticket types!

More Info


          Big Data Technical Architect   
MN-Minneapolis, Tavant is currently engaged in multiple long-term Big Data projects. Our client in Minneapolis, MN is obsessed with data analytics: they are running at least 10 such projects, with more in the pipeline. This Architect role is important to the process of expanding our analytics footprint with them. The position requires a comprehensive background in analytics, ideally with text analytics experience
          How universities can use big data to land grads careers   
BY ROB SPARKS, eCampus News. Have faculty, administrators and advisors actually prepared their students for the "real world" and aligned programs, degrees and training with the job market? Without diminishing the quality of the academic program, have students made the right choices to fulfill their ambitions and aspirations and begin their contributions to society? For decades, [...]
          All the pressure on Piñera: study shows the former president's candidacy has the largest press coverage ahead of the primaries   
This media strength is attributed to his campaign team's work to keep him on the agenda, but also to the conflicts of interest that have dogged his campaign, according to a big data study by Usach.
          SoftBank's AGOOP launches Crowd Map app for smartphones    
(Telecompaper) AGOOP, a member of SoftBank Group that provides location-based big data, has launched its 'Crowd Map' app for smartphones...
          Globetouch acquires Indian startup Teramatrix to leverage its IoT apps to support connected cars, autonomous driving   

This complements Globetouch's global connectivity suite and IoT platform, which manages the intersection of big data, machine learning and autonomous systems for enterprise customers. Globetouch, a California-headquartered company which provides global connectivity solutions and IoT platforms, has acquired Gurgaon-based IoT solutions firm Teramatrix Technologies. The transaction details were not disclosed. Globetouch will leverage and integrate Teramatrix's xFusion platform […]



          Java/Microservices - (Ipswich)   
Hello, Principal Java/Microservices Software Engineers
Duration: 6+ months contract to hire
Location: Ipswich, MA
Requirements:
o Minimum 10 years of experience in the specification, design, development and maintenance of enterprise-scale, mission-critical distributed systems with demanding non-functional requirements
o Bachelor's Degree in Computer Science, Computer Information Systems or a related field of study; Master's Degree preferred
o 8+ years of experience with SOA concepts, including data services and canonical models
o 8+ years of experience working with relational databases
o 8+ years of experience building complex server-side solutions in Java and/or C#
o 8+ years of experience in the software development lifecycle
o 3+ years of experience building complex solutions utilizing integration frameworks and ESB
o Demonstrated strong knowledge of, and experience applying, enterprise patterns to solve business problems
Preferred Qualifications:
o Leadership experience
o Strong abilities troubleshooting and tuning distributed environments processing high volumes of transactions
o Familiarity with model-driven architecture
o Familiarity with BPM technologies
o Experience with any of the following technologies: Oracle, MySQL, SQL Server, Linux, Windows, NFS, Netapp, Rest/SOAP, ETL, XML technologies
o In-depth technical understanding of systems, databases, networking and computing environments
o Familiarity with NLP and search technologies, AWS cloud-based technologies, Content Management systems, the publishing domain, and EA frameworks such as TOGAF and Zachman
o 2+ years of experience building complex Big Data solutions
o Excellent verbal, written and presentation skills, with the ability to communicate complex technical concepts to technical and non-technical professionals
Regards, Pallavi
781-791-3115 (468)
Java, Microservices, cloud, AWS, architect
Source: http://www.juju.com/jad/000000009qiqw5?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          In The Future Our Current Views Of Personal Data Will Be Shocking   

The way we view personal data in this early Internet age will continue to change and evolve, until one day we look back at this period and find ourselves shocked at how we didn't see people's digital bits as their own, and as something whose privacy and security we should respect and protect.

Right now my private, network-shared, or even public posts are widely viewed as a commodity, something the platform operator and other companies have every right to buy, sell, mine, extract, and generally do with as they wish. Very few startups see these posts as my personal thoughts; they simply see the opportunity for generating value and revenue in line with their interests. Sure, there are exceptions, but this is the general view of personal data in this Internet age.

We are barely 20 years into the web being mainstream, and barely over five years into mobile phones being mainstream. We are only beginning to see even deeper immersion of the Internet in our lives via our cars, televisions, appliances, and much more. We are only getting going when it comes to generating and understanding personal data, and the impacts of technology on our privacy, security, and overall human well-being. What is going on right now will not stay the norm, and we are already seeing signs of pushback from humans regarding ownership of their data, as well as our privacy and security.

While technology companies and their investors seem all-powerful right now, and many humans seem oblivious to what is going on, the landscape is shifting. I'm confident that humans will prevail, and that there will be pushback that begins helping us all define our digital selves and reclaim the privacy and security we are entitled to. When we look back on this period in 50 years, we will not look favorably on the companies and government agencies that exploited humans' personal data. We will see the frenzy of big data generation and accumulation, and the treatment of it as a commodity rather than as something that belongs to a human being, as deeply troubling.

Which side of history are you going to be on?


          The Oil Industry Waking Up To Data Being The New Oil   

When you hang out in startup circles you hear the phrase "data is the new oil" a lot. Getting the rights to mine and extract data, and to generate revenue from it, is big business, and VCs, hedge funds, and even governments are getting in on the game. Whether it is gathered from the private or public sector, or in your living room and pocket, everyone wants access to data.

One sign that this discussion is reaching new levels is that the oil industry itself is talking about data being the new oil. That is right. I'm increasingly coming across stories about big data and the revenue opportunities derived from data when it comes to IoT, social, and many other trending sectors. The big oil supply chain has access to a lot of data to support its efforts, as well as data generated as exhaust from daily oil production and consumption. The opportunity is real, man!

To entrepreneurs this shift is exciting, I'm sure. To me, it's troubling. Wall Street turning its sights to the data opportunity, and hedge funds getting in on the game, worried me, but big oil being interested is an even greater sign that things are reaching extreme levels. It is one thing to use "data is the new oil" as a skeuomorph to find investment in your startup, or to acquire new customers. It is another thing for the folks behind big oil to be paying attention; these are the same people who like to start wars to get at what they want.

Anyways, it is just one of many troubling signs emerging across the landscape. Many of my readers will dismiss this as meaningless, but these discussions are signs of an overall illness in how we see data, privacy, and security. Remember when we'd topple dictators to get at the world's oil resources? Well, welcome to the new world, where you topple democracies if you have access to the right data resources.


          There's A Fight Brewing Between Palantir And The NYPD   
Big data helped New York's cops bust Bobby Shmurda. But as the NYPD's contract with tech giant Palantir comes to an end, things could get messy.
          Evaluating web PKIs, by Jiangshan Yu and Mark Ryan   
Certificate authorities serve as trusted parties to help secure web communications. They are a vital component for ensuring the security of cloud infrastructures and big data repositories. Unfortunately, recent attacks using mis-issued certificates show this model is severely broken. Much research has been done to enhance certificate management in order to create more secure and reliable cloud architectures. However, none of it has been widely adopted yet, and it is hard to judge which one is the winner. This chapter provides a survey with critical analysis on the existing proposals for managing public key certificates. This evaluation framework would be helpful for future research on designing an alternative certificate management system to secure the internet.
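The trust model the chapter analyses is easy to see from a client's perspective: a TLS client accepts a server only if the server's certificate chains to a certificate authority the client already trusts. A minimal Python illustration (ours, not from the chapter):

import socket, ssl

ctx = ssl.create_default_context()  # loads the platform's trusted CA set
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake succeeds because a trusted CA vouched for this cert.
        # A mis-issued certificate from any trusted CA would also be accepted,
        # which is exactly the weakness the chapter discusses.
        print(tls.getpeercert()["subject"])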
          Elasticsearch Evaluation White Paper Released: Promising for Big Data   
We believe that Elasticsearch is a product that everyone working in the field of big data will want to take a look at.
          With big data, business units gain autonomy and break away from IT   
Data is ever more fundamental to companies, and the need to have it, and to understand it, demands a different relationship with IT
          Katherine Glasheen is designing drones to think differently.   
Katherine Glasheen

Katherine Glasheen has a nickname fit for an engineer: Machine. And it is not just because it rhymes with her last name.

A second-year aerospace PhD student, she has a drive to advance technology and is conducting research on socially aware drones, a project that will become increasingly important with wider adoption of UAVs. Today, however, it is future-focused enough that even her advisor calls it "kind of wild."

“The technology is developing faster than society can handle," Glasheen says. "One drone delivering a package in downtown Denver is challenging enough, but what about when there are hundreds of them? We need systems that are scalable and robust."

Her ideas have already earned a major stamp of approval. She was recently awarded a prestigious National Science Foundation Graduate Research Fellowship.

Kind of Wild

Her proposal calls for using internet data to infer local attitudes about drones.

"If the UAV could analyze news articles and comments on websites and knew people in the area it was traveling were uncomfortable with drones, it could deliberately avoid flying over places like schools, hospitals, and parks,” Glasheen says.

It was an intriguing possibility for her advisor, Associate Professor Eric Frew.

"The idea is to create an 'ethical drone' that understands and tries to respect local attitudes. It is a novel way to think about combining unmanned systems, big data, and artificial intelligence. I've not heard anyone suggest it before," Frew says.

Less Artificial, More Intelligence

To get there, Glasheen is first working on improving more conventional trip planning methods.

"For a delivery drone, the path it's following is preplanned and loaded before it takes off. That doesn't account for any variables it could encounter on the way where it would need to change course," Glasheen says.

What kind of variables? Think of the things you encounter driving to the grocery store. As humans, we can quickly react if a driver runs a red light. If we encounter traffic, we can take a different route.

While some drones have rudimentary obstacle avoidance systems, a quick YouTube search will show they are less than ideal in execution. In addition, obstacle avoidance cannot account for things like encroaching stormy weather.

Glasheen wants the drone to be able to change course and make adjustments midflight, but the AI is not the only problem. The kind of computer needed to process that much data is large and heavy, and would quickly turn a flying drone into a grounded paperweight.

Drones in the Cloud

That is where the cloud comes in. She sees a future where UAVs regularly contact cloud systems to relay problems and determine solutions.

"The drone has a small brain, but there's a big brain in the cloud. If the drone can ping the cloud and ask for help, it can get a solution to safely navigate through an environment," Glasheen says.

The technology has great potential for the future. While delivery drones are often discussed as a public use, a UAV that can exchange data with the cloud could improve military reconnaissance and even weather forecasting.
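As a purely hypothetical sketch of the "small brain / big brain" split Glasheen describes (the endpoint and message format below are invented for illustration, not drawn from the research):

import json
import urllib.request

PLANNER_URL = "https://planner.example.com/replan"  # hypothetical cloud service

def request_replan(position, obstacle):
    # The drone's 'small brain' reports its state and the problem it hit...
    payload = json.dumps({"position": position, "obstacle": obstacle}).encode()
    req = urllib.request.Request(PLANNER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    # ...and the cloud 'big brain' returns a replanned route.
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["waypoints"]

# e.g. request_replan({"lat": 39.75, "lon": -104.99}, "storm cell ahead")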

"It's all so exciting. The field is evolving every day and you can see new applications," Glasheen says. "A lot of it still unknown, which makes some people uncomfortable, but for me it's thrilling."

          Entera Upgrades to Support Big Data with Latest Version of NXTera   

NXTera 6.5 adds Hadoop and Hive database framework support.

(PRWeb May 13, 2015)

Read the full story at http://www.prweb.com/releases/2015/04/prweb12680865.htm


          Anti-fraud filtering: build a big-picture view from big data   
The ability to build large data sets complements more traditional fraud-indicator tools
          Austin Richerd posted a blog post   

How I Passed 70-475 Exam with Valid 70-475 Dumps

For quite a long time, I'd been trying my luck with Microsoft, applying for vital positions in major corporations through the 70-475 Designing and Implementing Big Data Analytics Solutions exam, but it seemed the odds were not in my favor. When I decided to go for the Microsoft Specialist 70-475 exam, I was totally blank about how I would pass it. I went through numerous exam material providers, but couldn't find any reliable material. After struggling hard, I suddenly came upon JustCerts. I never assumed that passing the 70-475 exam would be so easy. I scored 95% in my Microsoft Specialist 70-475 exam with the help of JustCerts' updated Designing and Implementing Big Data Analytics Solutions exam material.

My review of the 70-475 exam practice questions: While I was looking for the Microsoft Specialist 70-475 exam questions and answers, I could not find much up-to-date study material, and I never assumed that JustCerts would turn out to be my salvation. I was worried about how to go about the Microsoft Specialist Certification exam. I had few options to consider, so I gave it a shot and it actually worked.

How I prepared for the 70-475 exam in one week: I finally passed the Microsoft Specialist certification on the first try. The practice software from JustCerts greatly reduces the burden of preparation by offering the same kinds of questions that one comes across in the 70-475 Designing and Implementing Big Data Analytics Solutions exam. I downloaded the practice software soon after purchasing it from the website and started practicing for the 70-475 exam. All the exam questions were updated and accurate. The expert team prepares the exam material after analyzing the needs of the current curriculum.

70-475 exam recommendations: Before buying the JustCerts product, I got a chance to go through the material and get familiar with the interface. In fact, when I purchased the study material for the Microsoft Specialist 70-475 exam, JustCerts kept me informed of amendments made by companies and experts. JustCerts gave me 90 days of free updates, which helped me prepare for the Microsoft 70-475 exam and become Microsoft Specialist certified. I can never thank JustCerts enough for helping and accommodating me in preparing for the Microsoft Specialist 70-475 certification exam, and in future I will rely on JustCerts to excel in my certification exams, leaving no stone unturned. For more information please visit the link below: https://www.justcerts.com/Microsoft/70-475-practice-questions.html

          BigDL Democratizes Deep Learning Innovation   

Deep learning—a subset of machine learning and a key technique of artificial intelligence (AI)—is the most rapidly growing area of AI innovation. Big data and deep learning are technologies cut from the same cloth: both are enabled by the explosion of data brought about by the digital world. But to deal effectively with mountains of […]



          Future-proofing 'big data' biological research depends on good digital identifiers   
(PLOS) 'Big data' research runs the risk of being undermined by the poor design of the digital identifiers that tag data. A group of researchers worldwide, led by Julie McMurry at Oregon Health & Science University, has assembled a set of pragmatic guidelines for creating, referencing and maintaining web-based identifiers to improve reproducibility, attribution, and scientific discovery. The guidance, published June 29 in the open-access journal PLOS Biology, helps address the frequent problems associated with persistent identifiers linked to scientific data.
          Americans still shopping in stores know they’re missing deals – and they don’t care   


Despite a looming retail apocalypse, nearly half of Americans of all ages still do most of their shopping in brick-and-mortars.

That's according to a poll from our partner, MSN, which also found that 17% of Americans shop mostly online, while 34% shop online and in stores equally.

MSN polls its readers, and then uses machine learning to model how a representative sample of the US would have responded, using big data, such as the Census. It's nearly as accurate as a traditional, scientific survey.
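The general idea behind that kind of modelling can be sketched as post-stratification: weight each demographic group's answers by its share of the population rather than its share of the poll. The numbers below are invented purely for illustration; MSN's actual model is not public:

# Share of poll respondents, census population share, and each group's answer.
poll_share   = {"18-34": 0.50, "35-54": 0.30, "55+": 0.20}
census_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
in_store     = {"18-34": 0.35, "35-54": 0.50, "55+": 0.65}  # fraction shopping mostly in store

raw      = sum(poll_share[g] * in_store[g] for g in poll_share)      # naive poll mean
estimate = sum(census_share[g] * in_store[g] for g in census_share)  # census-reweighted
print(round(raw, 3), round(estimate, 3))  # the reweighted figure corrects the skewed sample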

Interestingly, Americans aren't shopping in stores for low prices — only 3% say they get better deals there. Rather, they're trading savings for immediacy: 74% of people who prefer brick-and-mortars are doing it so they "can see and get items right away." Just 13% say getting help from a person is the biggest advantage of in-store shopping.

What's more, only 10% are heading to stores to avoid shipping costs. Even with premium membership services like Amazon Prime, which offers free two-day shipping and same-day delivery, it's clear that many Americans are prioritizing instant gratification — holding a product in hand and bringing it home moments later — over potential savings.

For those who prefer online shopping, 41% said it's more convenient and 24% said they find better deals.

But those shoppers who prefer an in-store experience may not be losing out on too much savings after all. While prices can be cheaper online, that's not always the case. According to an MIT study covered in the Boston Globe, online prices were lower than in-store prices 22% of the time. Ultimately, major US retailers "charge the same price for goods they sell online as compared with in stores about 70% of the time."

These findings come amid widespread retail closures in the US. Some of the nation's biggest department stores and shopping mall mainstays have shuttered hundreds of locations around the country to bolster online offerings, reports Business Insider's Hayley Peterson. Due to the rise of e-commerce, visits to shopping malls declined by 50% between 2010 and 2013, according to the real-estate research firm Cushman & Wakefield.



          Big Data Developer   
NJ-Jersey City, Job Summary & Responsibilities: Compliance Technology provides technology solutions that help Compliance manage the firm's regulatory and reputational risks, and enable it to advise and assist the firm's businesses. The Data Analytics team, part of Compliance Technology, is seeking Java/Scala developers with deep knowledge of distributed computing principles. The candidate is responsible for design, dev
          Retail Sales Associate, Albuquerque, NM   
NM-Albuquerque, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Avoiding Big Data Failure – Let the Data Genie out of the Bottle!   

Opening keynote at Data Platforms 2017 with Qubole co-founder/CEO Ashish Thusoo. All businesses are now in the data business. Whether ingesting interaction clickstream and transactional…



          Vijayan Sugumaran – Computational Intelligence Applications in Business Intelligence and Big Data Analytics   

Name Product:
Market price:
Instructor:
File Size: 18.31 MB
Home: https://www.udemy.com/certified-web-application-tester

There are a number of books on computational intelligence (CI), but they tend to cover a broad range of CI...



          Unlocking big data’s benefits with data visualisation   
As every marketer knows, we have more data about our customers and how they interact with our brands at our fingertips than ever before. We have a deluge of real-time data flooding the business, from a wide range of systems and sources—internal CRM databases, data managed by agencies, data from channels such as search, social, ad-servers […]
          Check your own medical records by logging in to the 'My Health Bank' system with a Citizen Digital Certificate!   
Do you know how to look up your medical records? The National Health Insurance Administration (NHIA) of the Ministry of Health and Welfare has developed the 'My Health Bank' system: authenticate your identity with a Citizen Digital Certificate and you can query your medical records at any time - very convenient! This year (2015) the system also won the Ministry of the Interior's award for outstanding citizen-facing applications of the Citizen Digital Certificate.
To promote the concept of self-managed health, give the public timely and complete health information, and provide a reference for physicians at the point of care, while safeguarding personal medical privacy and the convenient provision of medical information, the 'My Health Bank' online service went live in October last year (2014). Citizens who log in to the NHIA website, pass identity authentication with a Citizen Digital Certificate and submit an application can immediately download or print their personal medical records and insurance data, making medical information more transparent and helping people keep track of their own medical history.
'My Health Bank' currently provides nine data items across two categories, medical and insurance. The medical category has six items: one year of outpatient records, one year of inpatient records, allergy data, a two-year dental health bank, organ donation and palliative care preferences, and vaccination records. The insurance category has three items: premium calculation details, premium payment details, and NHI card status and issuance records. More items will be added over time. As of the end of May this year (2015), close to 50,000 logins to the My Health Bank system had been made with Citizen Digital Certificates, averaging more than 6,000 queries per month.
The most important aim of 'My Health Bank' is to strengthen health care for the whole population; the public's health is the government's greatest intangible benefit. 'My Health Bank' has been included as one of the Executive Yuan's six online 'digital avatar' highlight projects. Beyond letting people use their portable health data (My Data) for self-managed health, the policy aims to drive innovation in the medical industry, diversify the technology industry and enable new commercial insurance models. In the future, test results, discharge summaries, imaging and pathology reports, and adult preventive health data will be added, and big data analysis will be used to feed health information back to the public in a timely way and improve the efficiency of care.
Besides 'My Health Bank', the Citizen Digital Certificate also supports online tax filing and online applications for land and household registration transcripts, among other functions. The Ministry of the Interior will continue to encourage agencies to develop applications for the Citizen Digital Certificate, making it even more versatile.
Keynote on My Health Bank by NHIA section chief 趙瑞華

Executive Secretary 鐘嘉德 of the Executive Yuan's Board of Science and Technology presents a trophy to NHIA division chief 黃明輝


          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Cloudera Certified Hadoop Administrator preferred. Strong administrator experience on a Big Data Hadoop Ecosystem and components (Sqoop, Hive, Pig, Flume, etc....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Customer Service Technician   
OR-White City, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Municipalities deploy big data against crime   

Five municipalities, together with the police, the Tax Administration and the Public Prosecution Service, will deploy big data to map "patterns and structures of organised crime". To that end, the parties involved signed the 'City Deal – Zicht op Ondermijning' this week. Big data projects: the City Deal is a collaboration between the municipalities of Amsterdam, The Hague, Rotterdam, Tilburg and Utrecht, CBS (Statistics Netherlands), the National Police, the [...]



          Microsoft runs a 'real' AI on a simple Raspberry Pi   
For now, the 'hardware' side of artificial intelligence technology follows two main currents. The first, which is also the biggest in terms of cost, involves very large compute servers designed to process enormous masses of data (big data); it is the …




          Linux Systems Administrator - Between Technology - Barcelona   
At BETWEEN we select and back the best talent in the technology sector. We get involved in a wide variety of cutting-edge projects, working with the latest technologies. BETWEEN currently has a team of more than 350 people. In the Development area we cover web and mobile projects and work in fields such as BI, IoT, Big Data and R&D. In the Operations area we deliver Service Desk, IT Infrastructure and Cloud projects, among others. ...
          Grand Ventures' First Investment is in Big Data Startup Astronomer   
Grand Ventures, the West Michigan venture capital firm investing in early-stage tech startups, has closed its first deal. The firm...
          How Companies Use Big Data for Businesses   

Big Data for Businesses

Big data, generally speaking, refers to data sets so large that they must be analyzed computationally. Analyzing the figures reveals patterns and trends, and associates behaviors and interactions with their users.…



          Big Data Developer - Verizon - Burlington, MA   
Beyond powering America’s fastest and most reliable network, we’re leading the way in broadband, cloud and security solutions, Internet of Things and innovating...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Enrich Customer Insights With Unstructured Data   

Over the past several years, Forrester's research has written extensively about the age of the customer. Forrester believes that only the enterprises that are obsessed with winning, serving, and retaining customers will thrive in this highly competitive, customer-centric economy. But in order to get a full view of customer behavior, sentiment, emotion, and intentions, Information Management professionals must help enterprises leverage all the data at their disposal, not just structured, but also unstructured. Alas, that's still an elusive goal, as most enterprises leverage only 40% of structured data and 31% of unstructured data for business and customer insights and decision-making.

So what do you need to do to start enriching your customer insights with unstructured data? First, get your text analysis terminology straight. For Information Management pros, the process of text mining and text analytics should not be a black box, where unstructured text goes in and structured information comes out. But today there is a lot of market confusion about the terminology and process of text analytics. The market, both vendors and users, often uses the terms text mining and text analytics interchangeably; Forrester makes a distinction and recommends that Information Management pros working on text mining/text analytics initiatives adopt the following terminology:

Read more
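As a toy illustration of the "unstructured text goes in, structured information comes out" shape described above (our sketch; real text analytics platforms use NLP models, not keyword lists):

reviews = [
    "The checkout process was slow and frustrating.",
    "Support resolved my issue quickly - great service.",
]

POSITIVE = {"great", "quickly", "excellent"}
NEGATIVE = {"slow", "frustrating", "broken"}

def to_record(text):
    # Unstructured text in...
    words = {w.strip(".,!-").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    # ...structured record out.
    return {"text": text, "sentiment": "positive" if score > 0 else "negative"}

print([to_record(r) for r in reviews])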
          Storage Field Day 13 – Wrap-up and Link-o-rama   
Disclaimer: I recently attended Storage Field Day 13. My flights, accommodation and other expenses were paid for by Tech Field Day and Pure Storage. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don't form part of my blog posts, but could influence future discussions.

This is a quick post to say thanks once again to Stephen, Tom and Claire, and the presenters at Storage Field Day 13 and Pure//Accelerate. I had a super fun and educational time. For easy reference, here's a list of the posts I did covering the events (they may not match the order of the presentations).

Storage Field Day – I’ll Be At Storage Field Day 13

Storage Field Day Exclusive at Pure//Accelerate 2017 – General Session Notes

Storage Field Day Exclusive at Pure//Accelerate 2017 – FlashBlade 2.0

Storage Field Day Exclusive at Pure//Accelerate 2017 – Purity Update

Storage Field Day 13 – Day 0

NetApp Doesn't Want You To Be Special (This Is A Good Thing)

Dell EMC's in the Midst of a Midrange Resurrection

X-IO Technologies Are Living On The Edge

SNIA's Swordfish Is Better Than The Film

ScaleIO Is Not Your Father's SDS

Dell EMC’s Isilon All-Flash Is Starting To Make Sense

Primary Data Attacks Application Ignorance

StorageCraft Are In Your Data Centre And In The Cloud

Storage Field Day 13 – (Fairly) Full Disclosure

 

Also, here's a number of links to posts by my fellow delegates (in no particular order). They're all very smart people, and you should check out their stuff, particularly if you haven't before. I'll attempt to keep this updated as more posts are published. But if it gets stale, the Storage Field Day 13 and Storage Field Day Exclusive at Pure Accelerate 2017 landing pages will have updated links.

 

Alex Galbraith (@AlexGalbraith)

Storage Field Day 13 (SFD13) – Preview

 

Brandon Graves (@BrandonGraves08)

Delegate For Storage Field Day 13

Storage Field Day Is Almost Here

 

Chris Evans (@ChrisMEvans)

Pure Accelerate: FlashArray Gets Synchronous Replication

 

Erik Ableson (@EAbleson)

 

Matthew Leib (@MBLeib)

Pure Storage Accelerate/Storage Field Day 13 – PreFlight

 

Jason Nash (@TheJasonNash)

 

Justin Warren (@JPWarren)

Pure Storage Charts A Course To The Future Of Big Data

 

Max Mortillaro (@DarkkAvenger)

See you at Storage Field Day 13 and Pure Accelerate!

Storage Field Day 13 Primer – Exablox

SFD13 Primer – X-IO Axellio Edge Computing Platform

Real-time Storage Analytics: one step further towards AI-enabled storage arrays?

 

Mike Preston (@MWPreston)

A field day of Storage lies ahead!

Primary Data set to make their 5th appearance at Storage Field Day

Hear more from Exablox at Storage Field Day 13

X-IO Technology A #SFD13 preview

SNIA comes back for another Storage Field Day

 

Ray Lucchesi (@RayLucchesi)

Axellio, next gen, IO intensive server for RT analytics by X-IO Technologies

 

Scott D. Lowe (@OtherScottLowe)

Backup and Recovery in the Cloud: Simplification is Actually Really Hard

The Purity of Hyperconverged Infrastructure: What's in a Name?

 

Stephen Foskett (@SFoskett)

The Year of Cloud Extension

 

Finally, thanks again to Stephen and the team at Gestalt IT for making it all happen. It was an educational and enjoyable week and I really valued the opportunity I was given to attend. Here's a photo of the Storage Field Day 13 delegates.

[image courtesy of Tech Field Day]


          DATA SCIENTIST / DATA ENGINEER FOR BIG DATA   
Grow through technical challenges. Take your next career step as a DATA SCIENTIST / …
          These Founders Turned An MIT Class Project Into A Leading Analytics Company   
Infinite Analytics Co-Founder and CEO Akash Bhatia shares the vision behind the company, the future of big data, and his plans to change how companies understand consumer insights.
          From big data to smart data: WFA & The Customer Framework webinar   

Further to a successful session at our IMC FORUM in London, Neil Woodcock and Nick Broomfield (The Customer Framework) share on how marketers can get to grips with data in order to empower and enable creative solutions. This webinar covers key tips, recommendations, opportunities, challenges and common pitfalls marketers should avoid.
          Big data: what multinational clients think   

Some highlights from research amongst WFA’s multinational member companies, conducted in partnership with The Customer Framework.
          Database Administrator / Architect - Morgan Stanley - New York, NY   
Data warehouse and Big Data lake architect, supporting an enterprise data warehouse platform designed and developed to store, process, curate and distribute the...
From Morgan Stanley - Wed, 22 Mar 2017 15:13:32 GMT - View all New York, NY jobs
          Quality Tools Engineer - Kudelski SA - Mountain View, CA   
Experience with data warehouse and Big Data architectures (Spark, Hadoop, Cassandra…). The Quality and Tools Engineer will join the Engineering Operations...
From Kudelski SA - Thu, 29 Jun 2017 00:25:24 GMT - View all Mountain View, CA jobs
          Ministry of Economic Development drafts amendments to introduce blockchain in the civil service   

29-06-2017

The Ministry of Economic Development is drafting legislative amendments to allow the use of blockchain, artificial intelligence and Big Data technologies in the civil service.


          PureLive Australia, August 2017   

Want to say hello to the new possible? Join Pure Storage at PureLive 2017.

PureLive 2017 visits Perth on 1 August, Sydney on 15 August, Melbourne on 16 August and finishes up in Brisbane on 17 August.

  • Hear from businesses which have completely transformed themselves via A.I., Big Data and Cloud initiatives.
  • Learn how all-flash data centers should be designed to enable telemetry, real-time insights and machine learning capability to maximise the value of your investment.
  • Understand how NVMe will potentially disrupt the data management industry, and what you can do to prepare.

You can find out about the agenda here.


          Artificial Intelligence Python Software Engineer   
MA-Lexington, Solidus is searching for an Artificial Intelligence Python Software Engineer. The candidate will develop AI systems for large multi-sensor and open source data sets. Projects involve system design and architecture and the development of algorithms for machine learning, computer vision, natural language processing, and graph analytics implemented on enterprise big data architectures. The candidate
          Big data in healthcare   
To get a sense of what is coming in new approaches to healthcare, consider the following. According to NetApp, a company dedicated to integrated mass data storage solutions, the human body produces up to 150 billion gigabytes of information describing the behavior of blood pressure, cholesterol levels, glucose in … Continue reading Big data en la salud
          Sr. Big Data Engineer   

          Linux Systems Administrator - Between Technology - Barcelona   
At BETWEEN we select and back the best talent in the technology sector. We get involved in a wide variety of cutting-edge projects, working with the latest technologies. BETWEEN currently has a team of more than 350 people. In the Development area we cover web and mobile projects and work in fields such as BI, IoT, Big Data and R&D. In the Operations area we deliver Service Desk, IT Infrastructure and Cloud projects, among others. ...
          Web Marketing Coordinator with AdWords Experience - StrongBox Data Solutions - Quebec   
Has basic knowledge of Adobe software (especially Photoshop, Illustrator and InDesign). Looking to be part of the future in Big Data management?... $40,000 - $50,000 a year
From StrongBox Data Solutions - Fri, 24 Mar 2017 20:57:05 GMT - View all Quebec jobs
          How Deliveroo Is Using Big Data And Machine Learning To Power Food Delivery   
Food delivery firm relies heavily on data and machine learning to increase efficiency
          Big Data is Transforming the Travel Industry   
In this contributed article, tech writer Rick Delgado, examines how the travel industry includes a wide range of businesses, such as rental car companies, hotels, airlines, tour operators, cruise lines and more. Each of these companies must find a way to improve the overall customer experience and to meet the unique needs of each customer, and big data assists with this process.
          Unravel Data Adds Native Support for Impala and Kafka   
Unravel Data, the Application Performance Management (APM) platform designed for Big Data, announced that it has integrated support for Cloudera Impala and Apache Kafka into its platform, allowing users to derive the maximum value from those applications. Unravel continues to offer the only full-stack solution that doesn’t just monitor and unify system-level data, but rather tracks, correlates, and interprets performance data across the full-stack in order to optimize, troubleshoot, and analyze from a single pane.
          Big Data Breakthrough: Process Mining   
In this special guest feature, Alexander Rinke, co-CEO and co-founder at Celonis, explains how big data – and more specifically process mining – can help organizations gain full transparency into their operations, in turn allowing them to improve margins, business agility and customer service while reducing operational costs.
          BigDL Democratizes Deep Learning Innovation   

Deep learning—a subset of machine learning and a key technique of artificial intelligence (AI)—is the most rapidly growing area of AI innovation. Big data and deep learning are technologies cut from the same cloth. They’re both enabled by the explosion of data brought about by the digital world.  But to deal effectively with mountains of ...continue reading BigDL Democratizes Deep Learning Innovation

The post BigDL Democratizes Deep Learning Innovation appeared first on IT Peer Network.

Read more >

The post BigDL Democratizes Deep Learning Innovation appeared first on Blogs@Intel.


          Sr Member of Technical Staff-Big Data/Analytics - Middletown, NJ   
We are looking for a Senior Member of Technical Staff to work in our Access Analytics group. In this role, you will be utilizing your skills to solve real world problems, such as predicting and improving customer care-related issues, improving customer experience with...
          Xinghe Rongkuai (星河融快): 2017 Global Big Data Industry Report, Overseas Edition   

As the opening article of this series, this installment surveys the overall ecosystem of the big data industry from a macro perspective, covering data collection, distributed storage and processing, and, built on top of them, data analysis, visualization, and applications across many industries. Each subsequent article will profile dozens of representative companies in around five industries, and will pick one key industry for a thorough walk-through with detailed case studies. So, to begin: how did big data technology come about?

Part 1: The technical foundations of big data

As early as 1980, the futurist Alvin Toffler enthusiastically hailed "big data" as "the cadenza of the third wave" in his book The Third Wave, the first time people showed a preliminary grasp of the value that massive data could generate.

For a long time, however, limited connectivity meant data was applied mostly to companies' internal business intelligence. Only with the spread of the Internet and the mobile Internet could companies connect directly with users and capture large volumes of behavioral and consumption data, and the outline of a big data industry gradually came into focus.

In the early 2000s, Google needed to crawl, store, index and rank enormous numbers of web pages while keeping hardware procurement costs down, and gradually worked out a distributed storage and computing architecture built from ordinary physical machines. Known through MapReduce and GFS, this approach allowed big data to be stored in a distributed fashion across many databases and processed with massive parallelism, solving the earlier problem that a single computer had too little storage and took too long to compute to be practical.
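
To make the MapReduce model above concrete, here is a minimal word-count sketch in the Hadoop Streaming style (our illustration, not part of the report; file names are hypothetical). The mapper emits a (word, 1) pair per word, the framework sorts the pairs by key, and the reducer sums the counts for each word:

    # mapper.py: read raw text from stdin, emit "word<TAB>1" per word
    import sys
    for line in sys.stdin:
        for word in line.strip().split():
            print(word + "\t1")

    # reducer.py: input arrives sorted by key, so counts for one word are adjacent
    import sys
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(current + "\t" + str(count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(current + "\t" + str(count))

The two scripts are shown as one listing here; in practice they would be two files passed to the Hadoop Streaming jar.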

Drawing on the papers Google published from late 2003, a former Yahoo engineer developed Hadoop, a similar distributed storage and computing technology. A huge ecosystem then grew up around Hadoop, gradually maturing the big data infrastructure.

Hadoop spans the entire pipeline, from data collection, storage and analysis to data movement and front-end display. For example, HDFS provides distributed storage, HBase provides database functionality, Flume collects data, Sqoop transfers and manages data movement, MapReduce implements distributed computation, Hive acts as the data warehouse, Pig handles data-flow processing, ZooKeeper provides coordination and load balancing between nodes, and Ambari lets administrators see how the whole stack is running.

[figure: Hadoop ecosystem technology architecture]

As the technology developed, databases and processing engines suited to particular scenarios also multiplied. MongoDB, a database for unstructured data, has been widely adopted for its relatively powerful conditional queries and flexible data structures, while Spark keeps Hadoop-style intermediate data in memory rather than on disk and gains roughly hundred-fold processing speedups; Databricks Cloud is a productized service built on this architecture.
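
As a small illustration of the flexible documents and conditional queries just mentioned, here is a sketch using the pymongo driver (our example, assuming a local MongoDB server; the collection and field names are made up):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed local server
    readings = client.demo.readings

    # Documents in the same collection need not share a schema
    readings.insert_many([
        {"sensor": "blood_pressure", "value": 128, "tags": ["clinic"]},
        {"sensor": "glucose", "value": 5.4, "meta": {"fasting": True}},
    ])

    # Conditional query: every reading whose value exceeds 100
    for doc in readings.find({"value": {"$gt": 100}}):
        print(doc)
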

Beyond these, the big data ecosystem contains many other lines of technical development. MPP technology, mostly built on relational databases, has goals similar to Hadoop's: partition the data, compute on the pieces independently, then aggregate. Compared with SQL-on-Hadoop, MPP offers a higher degree of data optimization, faster computation and strength in cross-dimensional analysis, making it a good fit for enterprise data analysis; its scalability, however, is weaker than Hadoop's (it generally loses its computational advantage beyond about ten nodes), and its closed-source architecture leaves it heavily dependent on specific hardware.

Teradata is a representative vendor of the MPP storage model; its enterprise data analysis helps staff cut the effort and expense of big data processing so companies can focus on running the business. Amid the acquisition wave stirred up by traditional database companies and by enterprise software vendors trying to enter the database market (SAP, for example), Teradata is one of the few large independent data analytics companies left on the market.

Part 2: Where big data comes from

In 2011 McKinsey published a report titled "Big Data: The Next Frontier for Innovation, Competition and Productivity", noting that US companies with more than 1,000 employees stored over 200 TB of data on average, and that mining that data for value would unlock the potential of many industries and companies. The report marked the beginning of the commercial big data boom and made enterprise software the first data source for big data.

As storage and computing capabilities strengthened and China's domestic big data industry took off, some practitioners saw the industry's huge prospects but also realized how scarce domestic data resources were. Because high-value data on public welfare, telecommunications, transport, electric power and the like is held by the government and large state-owned enterprises and is not open, obtaining data sources became a bigger problem than improving data processing methods.

At present, the market data in China that can be de-identified and used comes mainly from single channels and scenarios such as mobile phones and PCs. Analytics and consulting firms such as TalkingData, Umeng, iResearch and Analysys rely heavily on these resources and are also constrained by them. And because government data is sensitive, only a few organizations can plug into government data resources. As demand for data grows and the value of data resources gains acceptance, government data is therefore expected to become an important part of the data supply.

Wider-ranging data collection will rest on the Internet of Things. As we wrote in "About to be surrounded by 28.1 billion sensors, and you still haven't figured out IoT?", we are expected to be surrounded by 28.1 billion sensors by 2020, and on the 27th of this month China Unicom announced that its IoT connections already exceed 50 million. Foreseeably, from the consumer's perspective, every aspect of daily life (clothing, food, housing, transport) will be fitted with IoT devices collecting data in real time, and that data will let merchants provide better, even customized, services: a win-win. In industry, the big data collected by IoT will likewise play a large role, forming a virtuous cycle.

As data samples and collection channels grow richer, services for the data collection process, data transformation and transport, and data storage have also developed considerably; Informatica and MuleSoft are representative companies in multi-channel data integration and data governance.

Part 3: Big data analysis and visualization

With sufficient storage and computing power and large volumes of data in hand, the development of the data analysis industry followed naturally. The general-purpose data analysis industry today consists of six main lines of business: data analysis, analytics visualization, big data search, and the derived businesses of data service platforms, business intelligence analysis, and big data prediction and consulting.

Data analysis itself will be covered in detail in the second and third articles; today we only outline the overall situation and possible future directions.

For companies, the greatest value of big data analysis is the ability to integrate the large stores of user behavior data, consumption data and enterprise software data they have accumulated, and to use the analysis to optimize product design, pricing and sales methods while lowering internal operating costs and raising efficiency. Pentaho, for example, pulls all kinds of data out of enterprise software (mainly SAP) for mining and analysis, ultimately saving companies large amounts of report-building time and letting managers see how the business is running in real time.

Likewise, companies in specialized sectors such as telecommunications, electric power and transport can collect user data to analyze and forecast future demand, adjust prices intelligently in real time, and distribute load sensibly, maximizing profit while keeping operations safe.

Analyzing public sentiment data helps companies track market mood in time and iterate quickly on their products and services; for financial firms it also surfaces the latest developments fast enough to avoid the risks of information asymmetry. Datameer's data analysis engine, for example, monitors public messages in real time and examines their language and how they propagate, so users get the latest news before the media report it, with visualizations that make it quick and easy to pick up.

Big data visualization, built on top of big data analysis, is the means by which people understand analytical results more readily. Most vendors offering visualization treat it as an extension of their analytics business. Bottlenose, for instance, alongside its automated analytics, offers a "sonar" view for social media analysis that makes complex relationships and logical threads clear at a glance, increasing adoption of its analytics offerings.

As analysis techniques and methods keep upgrading, visualization is expected to become a key direction; connecting ever more complex analytical results to people will face continual technical challenges.

Part 4: Industry applications of big data

Big data technology is already regarded as infrastructure for future economic life, which means almost every industry can improve its economic efficiency on top of big data analytics. Xinghe Research Institute's study of big data applications covers more than 20 industries, including e-commerce, media and marketing, logistics, enterprise services, education, automotive and fintech; the introductions to these industries and companies will appear in articles four through seven.

In sales, after feeding in a customer's personality, dress habits, industry and historical sales data, big data analysis can tell a salesperson when calling which customer has the highest probability of yielding an order. In brand building, Persado analyzes market sentiment to write copy that resonates with users and wins consumer goodwill. In law, Ravel can "read" hundreds of thousands of past judgments and, for a case the user enters, predict the probability of each ruling, helping lawyers shape a defense strategy; in the long run, legal big data companies may well replace most junior lawyers. Likewise in retail, advertising, healthcare and many other fields, big data can analyze the relationships latent in data to enable purchase prediction, precise audience targeting, diagnostic decision support and more. The industry applications of big data go far beyond those mentioned above; in the articles to come we will show the magic of big data applications one by one.

Part 5: Big data becomes fuel for the AI industry

Artificial intelligence has long been a pursuit of scientists and engineers, but its development has not been smooth. In early natural language recognition, for example, scientists hoped that grammar rules would let computers understand meaning and thereby become intelligent, but practice proved this path unworkable; statistical methods based on large data samples were what later raised the accuracy of natural language processing to a usable level.

Today, as computing technology and data volumes advance, the benefits big data can bring us are no longer limited to looking up information. Beyond the familiar "personal assistants" and live beautification features, AI for speech and vision offers brain-inspired writing, automatic meeting minutes, emotion recognition and personality analysis, and even search within video content, all of which can give business and industry a significant push.

Acknowledgements: Wang Gang

Notes:

Hadoop: a distributed systems infrastructure developed under the Apache Foundation.

HDFS: the distributed file system in Hadoop; it is suited to running on commodity hardware, is highly fault tolerant, provides high-throughput data access, and fits applications over very large data sets.

MapReduce: a programming model for parallel computation over large data sets (over 1 TB); it makes it far easier for programmers without distributed parallel programming experience to run their programs on a distributed system.

MPP (Massively Parallel Processing): a system composed of many loosely coupled processing units; the CPUs in each unit have their own private resources, and each unit runs its own instance of the operating system and database.

SAP: the world's largest supplier of enterprise management and collaborative commerce solutions and the world's third-largest independent software vendor, headquartered in Germany.

GFS: a scalable distributed file system developed by Google for large, distributed applications that access large amounts of data; it runs on ordinary hardware and provides fault tolerance.

Source: 36kr




          How Big is Big Data   

The post How Big is Big Data appeared first on The Learning Catalyst.


          What is Big Data   

The post What is Big Data appeared first on The Learning Catalyst.


          Data Engineer with Scala/Spark and Java - Comtech LLC - San Jose, CA   
Job Description. Primary Skills: Big Data experience; 8+ years of experience in Java, Python and Scala with Spark and Machine Learning (3+); data mining; data analysis
From Comtech LLC - Fri, 23 Jun 2017 03:10:08 GMT - View all San Jose, CA jobs
          Big Data Developer - Verizon - Burlington, MA   
Beyond powering America’s fastest and most reliable network, we’re leading the way in broadband, cloud and security solutions, Internet of Things and innovating...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Database Administrator / Architect - Morgan Stanley - New York, NY   
Data warehouse and Big Data lake architect, supporting an enterprise data warehouse platform designed and developed to store, process, curate and distribute the...
From Morgan Stanley - Wed, 22 Mar 2017 15:13:32 GMT - View all New York, NY jobs
          Quality Tools Engineer - Kudelski SA - Mountain View, CA   
Experience with data warehouse and Big Data architectures (Spark, Hadoop, Cassandra…). The Quality and Tools Engineer will join the Engineering Operations...
From Kudelski SA - Thu, 29 Jun 2017 00:25:24 GMT - View all Mountain View, CA jobs
          iVEDiX to Present On Big Data, BI for CIO Roundtable of Western New York   

The presentation will focus on Big Data, and how organizations can determine the best way to develop BI initiatives that meet their needs.

(PRWeb April 29, 2014)

Read the full story at http://www.prweb.com/releases/2014/04/prweb11804570.htm


          (USA-DE-Dover) Regional Sales Manager   
About Us: GE is the world's Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. Through our people, leadership development, services, technology and scale, GE delivers better outcomes for global customers by speaking the language of industry. GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer (http://www.ge.com/sites/default/files/15-000845%20EEO%20combined.pdf). Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.

Role Summary: In this strategic sales position, the Regional Sales Manager – Digital Sales Direct will be responsible for meeting and exceeding an orders operating plan in the west region across a range of power utility and industry customers, including identifying sales opportunities within existing and new accounts, prospecting, strategy planning, executive relationship development, and deal closures.

Essential Responsibilities: In this role, you will:

  • Develop a solution selling framework for the GE business: tailor deal lifecycle, enablement deliverables, required resources, technology support, maturity model, sales methodology, etc.
  • Establish a deep understanding of customers' business needs by creating value for customers with our solution footprint.
  • Add value to the customer's business and maintain a goal-oriented approach to the business partnership.
  • Demonstrate to customers how they benefit by partnering with GE and how our solutions deliver results.
  • Aggressively develop and drive a sustainable commercial and solution strategy across multiple customer divisions that is aligned to the agreed account goals.
  • Develop and execute an Account Playbook that formalizes the "go high / go low" strategy for the Enterprise account. Where applicable, develop a joint governance process with executive sponsorship that aligns along the following pillars: Commercial, Product/Technology, Implementation and Support.
  • Leverage the "Big GE" by coordinating across multiple GE divisions to solve customer challenges and enhance value and loyalty through the introduction of GE corporate programs (e.g. Industrial Internet, Minds + Machines).
  • Analyze the sales pipeline and maintain an array of opportunities to ensure that sales goals are achieved.
  • Actively grow and maintain a multi-year account plan that will be shared globally with parts of our business including Marketing, Product Management, Sales, Professional Services, and the Development teams to ensure coordination across the business.
  • Ensure a professional sales experience for customers during all aspects of the sales process and touch points, including formal meeting agendas, formal follow-up stating the sequence of events and next steps in writing, and issue resolution in a timely fashion.
  • Formulate winning proposals based on a cohesive strategy that leverages deep knowledge of industry, customer and GE product.

Qualifications/Requirements:

Basic Qualifications:

  • Bachelor's Degree in business, marketing or a related discipline
  • 12+ years of software industry experience minimum with a proven track record

Eligibility Requirements:

  • Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job.
  • Any offer of employment is conditioned upon the successful completion of a background investigation and drug screen.
  • Must be able to travel 50-70% of the time.

Desired Characteristics:

  • Works with individuals across the GE businesses on how to use Big Data to collect and analyze market information, as well as how to present analysis and recommendations to drive strategic commercial decisions.
  • Proactive in seeking out new digital platforms to drive deeper connections with customers, such as heat mapping existing relationships on LinkedIn to identify new sales opportunities; active in industry groups/blogs to gain exposure to target audiences; viewed as a domain expert.
  • Develops acceptable strategies to mitigate risks triggered in RFPs and/or customers' T&Cs while meeting GE business objectives.
  • Leads the implementation of economic value selling throughout the customer organization.
  • Thoroughly analyzes data to identify trends and issues that translate into a plan for the customer, with some connection to seemingly independent problems.
  • Develops acceptable mitigation strategies that consider the T&Cs of customers, competition and partners and key differentiators while also meeting business objectives.
  • Identifies and prioritizes critical GE resources needed to further the sales effort, negotiating with stakeholders for utilization.

Technical Expertise:

  • Establishes trust and empathy as an advisor to the client; works collaboratively in pursuit of discovery to define a desired business outcome while also uncovering unknown business outcomes the client has not previously considered; ensures that a plan is laid out to accomplish all outcomes.
  • Proactively identifies pipeline risks and develops mitigation plans; proactively shares best practices to improve pipeline efficiency; helps to develop sales team relationships with key contacts.
  • Able to take products, services and solutions knowledge and connect it to customers' objectives to develop differentiated opportunities for GE; draws upon non-traditional solutions; constantly thinks outside the box and outside the domain of expertise to develop creative solutions that meet ongoing customer needs.

Business Acumen:

  • Leads the implementation of economic value selling throughout the customer organization; offers assistance and input to others across GE on this topic.
  • Communicates vertical expertise or future trends within the power utility and other industries that drive certain benefits/challenges; is seen as part of the customer's team rather than an outsider; uses this dialogue to narrow and refine the customer's objectives or "top-of-mind" thoughts in order to start joint brainstorming of potential solutions or to identify industry benchmarks.
  • Viewed as a thought leader by strategic customers, invited to advise customers based on GE solution knowledge and industry expertise; brokers introductions and relationship handoffs with the customer C-suite to other GE team members.
  • Able to use a variety of financial data to build a broad perspective of company and customer business within their respective industry and markets.
  • Understands the financial implications of different value drivers and creatively applies that understanding to impact company/customer metrics, i.e. revenue, profitability, market share, etc.

Leadership:

  • Establishes and communicates team members' roles in relation to their function and data; shares knowledge, power and credit, establishing trust, credibility and goodwill; coordinates role responsibilities with those of others to achieve mutual goals; encourages groups to work together to efficiently resolve problems.
  • Able to consistently lead the process to develop winnable strategies; creatively uses resources to anticipate and solve problems, resulting in innovative solutions and customer and GE satisfaction, and finds alternatives beyond the obvious; keeps a broad perspective on the customer relationship and potential opportunities to increase customer loyalty.

Locations: United States; California, Oregon, Washington; Redmond, San Ramon, West Coast Cities.

GE will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditional upon the successful completion of a background investigation and drug screen.
          CCNA® & CCNP®: Routing & Switching Certification for $39   
Study to Become A Certified Network Engineer with Cisco Systems
Expires December 26, 2017 23:59 PST
Buy now and get 90% off

KEY FEATURES

Cisco systems are some of the most commonly used in enterprises around the world, and it's vital that aspiring network engineers become familiar with the Cisco Certified Network Associate (CCNA) and Cisco Certified Network Professional (CCNP) certifications in order to make it in the industry. This training will help you maximize your foundational networking knowledge, all while preparing you to ace the CCNA® and CCNP® certification exams.

  • Access 55 hours of content 24/7
  • Learn to install, configure, operate, & troubleshoot medium-size routed & switched networks
  • Manage & optimize network systems & focus on network infrastructure
  • Coordinate collaboratively w/ network specialists on advanced security, voice, wireless, & video solutions
  • Understand how to design, apply, authenticate, & troubleshoot local & wide-area enterprise networks
  • Fortify your knowledge w/ 55 chapter-end quizzes, 2 CCNA® & CCNP® simulation exams, & a downloadable ebook

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: 1 year
  • Access options: web streaming, mobile streaming, download for offline viewing
  • Certification of completion included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

The online courses at Certs School give folks the chance to throw their careers into overdrive without ever leaving their cubicle. Designed to let students learn at their own pace, the courses give people the chance to learn everything from analyzing big data to using business tools such as Salesforce. Every course is designed by industry insiders with years of experience. For more details on this course and instructor, click here.

          The Ultimate Data Infrastructure Architect Bundle for $36   
From MongoDB to Apache Flume, This Comprehensive Bundle Will Have You Managing Data Like a Pro In No Time
Expires June 01, 2022 23:59 PST
Buy now and get 94% off

Learning ElasticSearch 5.0


KEY FEATURES

Learn how to use ElasticSearch in combination with the rest of the Elastic Stack to ship, parse, store, and analyze logs! You'll start by getting an understanding of what ElasticSearch is, what it's used for, and why it's important before being introduced to the new features of Elastic Search 5.0.

  • Access 35 lectures & 3 hours of content 24/7
  • Go through each of the fundamental concepts of ElasticSearch such as queries, indices, & aggregation
  • Add more power to your searches using filters, ranges, & more
  • See how ElasticSearch can be used w/ other components like LogStash, Kibana, & Beats
  • Build, test, & run your first LogStash pipeline to analyze Apache web logs
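
For a sense of what talking to ElasticSearch looks like, here is a minimal sketch against its REST API (our illustration, not course material; assumes a local ElasticSearch 5.x node and the Python requests library; index and field names are made up):

    import requests

    ES = "http://localhost:9200"  # assumed local node

    # Index one document, asking ES to refresh so it is searchable at once
    requests.put(ES + "/logs/event/1",
                 params={"refresh": "true"},
                 json={"msg": "user login failed", "level": "warn"})

    # Basic full-text match query
    resp = requests.post(ES + "/logs/_search",
                         json={"query": {"match": {"msg": "login"}}})
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_score"], hit["_source"])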

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Ethan Anthony is a San Francisco based Data Scientist who specializes in distributed data centric technologies. He is also the Founder of XResults, where the vision is to harness the power of data to innovate and deliver intuitive customer facing solutions, largely to non-technical professionals. Ethan has over 10 combined years of experience in cloud based technologies such as Amazon webservices and OpenStack, as well as the data centric technologies of Hadoop, Mahout, Spark and ElasticSearch. He began using ElasticSearch in 2011 and has since delivered solutions based on the Elastic Stack to a broad range of clientele. Ethan has also consulted worldwide, speaks fluent Mandarin Chinese and is insanely curious about human cognition, as related to cognitive dissonance.

Apache Spark 2 for Beginners


KEY FEATURES

Apache Spark is one of the most widely-used large-scale data processing engines and runs at extremely high speeds. It's a framework that has tools that are equally useful for app developers and data scientists. This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup.

  • Access 45 lectures & 5.5 hours of content 24/7
  • Learn the Spark programming model through real-world examples
  • Explore Spark SQL programming w/ DataFrames
  • Cover the charting & plotting features of Python in conjunction w/ Spark data processing
  • Discuss Spark's stream processing, machine learning, & graph processing libraries
  • Develop a real-world Spark application
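
As a taste of the Spark 2 programming model described above, here is a minimal DataFrame and Spark SQL sketch with PySpark (our illustration, not from the course; names and data are made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("demo").getOrCreate()

    # A tiny DataFrame, registered as a SQL view
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
    df.createOrReplaceTempView("people")

    # The same data queried through Spark SQL
    spark.sql("SELECT name FROM people WHERE age > 30").show()
    spark.stop()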

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based out of the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has been working mainly in Java related technologies, and does heavy-duty server-side programming in Java and Scala. He has worked on very highly concurrent, highly distributed, and high transaction volume systems. Currently he is building a next generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds one master's degree in Mathematics, one master's degree in Computer Information Systems and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener to classical music and watches a lot of tennis.

Designing AWS Environments


KEY FEATURES

Amazon Web Services (AWS) provides trusted, cloud-based solutions to help businesses meet all of their needs. Running solutions in the AWS Cloud can help you (or your company) get applications up and running faster while providing the security needed to meet your compliance requirements. This course leaves no stone unturned in getting you up to speed with administering AWS.

  • Access 19 lectures & 2 hours of content 24/7
  • Familiarize yourself w/ the key capabilities to architect & host apps, websites, & services on AWS
  • Explore the available options for virtual instances & demonstrate launching & connecting to them
  • Design & deploy networking & hosting solutions for large deployments
  • Focus on security & important elements of scalability & high availability
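
As a hint of what administering AWS programmatically looks like, here is a small sketch using the boto3 SDK (our illustration, not course material; assumes AWS credentials are configured, and the region is an arbitrary choice):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

    # List the running EC2 instances in the region
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"], inst["InstanceType"])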

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Wayde Gilchrist started moving customers of his IT consulting business into the cloud and away from traditional hosting environments in 2010. In addition to consulting, he delivers AWS training for Fortune 500 companies, government agencies, and international consulting firms. When he is not out visiting customers, he is delivering training virtually from his home in Florida.

Learning MongoDB


KEY FEATURES

Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding
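
To illustrate the aggregation framework mentioned in the bullets above, here is a short pymongo sketch (our example, assuming a local MongoDB; the collection and fields are made up) that computes the average order total per customer:

    from pymongo import MongoClient

    orders = MongoClient("mongodb://localhost:27017").shop.orders

    # Aggregation pipeline: filter, group, then sort the groups
    pipeline = [
        {"$match": {"status": "complete"}},
        {"$group": {"_id": "$customer", "avg_total": {"$avg": "$total"}}},
        {"$sort": {"avg_total": -1}},
    ]
    for row in orders.aggregate(pipeline):
        print(row["_id"], row["avg_total"])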

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

Learning Hadoop 2


KEY FEATURES

Hadoop emerged in response to the proliferation of masses and masses of data collected by organizations, offering a strong solution to store, process, and analyze what has commonly become known as Big Data. It comprises a comprehensive stack of components designed to enable these tasks on a distributed scale, across multiple servers and thousands of machines. In this course, you'll learn Hadoop 2, introducing yourself to the powerful system synonymous with Big Data.

  • Access 19 lectures & 1.5 hours of content 24/7
  • Get an overview of the Hadoop component ecosystem, including HDFS, Sqoop, Flume, YARN, MapReduce, Pig, & Hive
  • Install & configure a Hadoop environment
  • Explore Hue, the graphical user interface of Hadoop
  • Discover HDFS to import & export data, both manually & automatically
  • Run computations using MapReduce & get to grips working w/ Hadoop's scripting language, Pig
  • Siphon data from HDFS into Hive & demonstrate how it can be used to structure & query data sets
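
The HDFS import/export step mentioned above boils down to a handful of shell commands; here is a sketch that drives them from Python (our illustration; assumes a configured Hadoop client on the PATH and a hypothetical sales.csv):

    import subprocess

    # Create a directory in HDFS, upload a local file, then pull it back down
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", "/data/raw"], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "sales.csv", "/data/raw/"], check=True)
    subprocess.run(["hdfs", "dfs", "-get", "/data/raw/sales.csv", "sales_copy.csv"], check=True)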

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Randal Scott King is the Managing Partner of Brilliant Data, a consulting firm specialized in data analytics. In his 16 years of consulting, Scott has amassed an impressive list of clientele from mid-market leaders to Fortune 500 household names. Scott lives just outside Atlanta, GA, with his children.

ElasticSearch 5.x Cookbook eBook


KEY FEATURES

ElasticSearch is a Lucene-based distributed search server that lets users index and search unstructured content across petabytes of data. This ebook guides you through comprehensive recipes covering what's new in ElasticSearch 5.x as you create complex queries and analytics. By the end, you'll have in-depth knowledge of how to implement the ElasticSearch architecture and be able to manage data efficiently and effectively.

  • Access 696 pages of content 24/7
  • Perform index mapping, aggregation, & scripting
  • Explore the modules of Cluster & Node monitoring
  • Understand how to install Kibana to monitor a cluster & extend Kibana for plugins
  • Integrate your Java, Scala, Python, & Big Data apps w/ ElasticSearch
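
Since the recipes cover index mapping, here is a minimal sketch of creating an index with an explicit mapping on ElasticSearch 5.x (our illustration; assumes a local node, and the index, type and fields are made up):

    import requests

    ES = "http://localhost:9200"  # assumed local node

    # ElasticSearch 5.x still uses mapping types ("item" here)
    requests.put(ES + "/products", json={
        "settings": {"number_of_shards": 1},
        "mappings": {
            "item": {
                "properties": {
                    "name":  {"type": "text"},
                    "price": {"type": "float"},
                }
            }
        }
    })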

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.

In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSql datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).

In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDBengine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.

Fast Data Processing with Spark 2 eBook


KEY FEATURES

Compared to Hadoop, Spark is a significantly simpler way to process Big Data at speed. It is increasing in popularity with data analysts and engineers everywhere, and in this course you'll learn how to use Spark with minimum fuss. Starting with the fundamentals, this ebook will help you take your Big Data analytical skills to the next level.

  • Access 274 pages of content 24/7
  • Get to grips w/ some simple APIs before investigating machine learning & graph processing
  • Learn how to use the Spark shell
  • Load data & build & run your own Spark applications
  • Discover how to manipulate RDD
  • Understand useful machine learning algorithms w/ the help of Spark MLlib & R
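
To show what "manipulating RDDs" means in practice, here is a tiny PySpark sketch (our illustration, not from the ebook) that does the classic word count with RDD transformations:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
    sc = spark.sparkContext

    rdd = sc.parallelize(["big data", "fast data", "big deal"])
    counts = (rdd.flatMap(lambda line: line.split())   # split lines into words
                 .map(lambda w: (w, 1))                # pair each word with 1
                 .reduceByKey(lambda a, b: a + b))     # sum the pairs per word
    print(counts.collect())
    spark.stop()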

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Krishna Sankar is a Senior Specialist—AI Data Scientist with Volvo Cars focusing on Autonomous Vehicles. His earlier stints include Chief Data Scientist at http://cadenttech.tv/, Principal Architect/Data Scientist at Tata America Intl. Corp., Director of Data Science at a bioinformatics startup, and Distinguished Engineer at Cisco. He has spoken at various conferences, including ML tutorials at Strata SJC and London 2016, Spark Summit, Strata-Spark Camp, OSCON, PyCon, and PyData; writes about Robots Rules of Order, Big Data Analytics—Best of the Worst, predicting NFL, Spark, Data Science, Machine Learning, and Social Media Analysis; and has been a guest lecturer at the Naval Postgraduate School. His occasional blogs can be found at https://doubleclix.wordpress.com/. His other passions are flying drones (he is working towards a Drone Pilot License (FAA UAS Pilot)) and Lego Robotics; you will find him at the St. Louis FLL World Competition as Robots Design Judge.

MongoDB Cookbook: Second Edition eBook


KEY FEATURES

MongoDB is a high-performance, feature-rich, NoSQL database that forms the backbone of the systems that power many organizations. Packed with easy-to-use features that have become essential for a variety of software professionals, MongoDB is a vital technology to learn for any aspiring data scientist or systems engineer. This cookbook contains many solutions to the everyday challenges of MongoDB, as well as guidance on effective techniques to extend your skills and capabilities.

  • Access 274 pages of content 24/7
  • Initialize the server in three different modes w/ various configurations
  • Get introduced to programming language drivers in Java & Python
  • Learn advanced query operations, monitoring, & backup using MMS
  • Find recipes on cloud deployment, including how to work w/ Docker containers along MongoDB

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Amol Nayak is a MongoDB certified developer and has been working as a developer for over 8 years. He is currently employed with a leading financial data provider, working on cutting-edge technologies. He has used MongoDB as a database for various systems at his current and previous workplaces to support enormous data volumes. He is an open source enthusiast and supports it by contributing to open source frameworks and promoting them. He has made contributions to the Spring Integration project, and his contributions are the adapters for JPA, XQuery, MongoDB, Push notifications to mobile devices, and Amazon Web Services (AWS). He has also made some contributions to the Spring Data MongoDB project. Apart from technology, he is passionate about motor sports and is a race official at Buddh International Circuit, India, for various motor sports events. Earlier, he was the author of Instant MongoDB, Packt Publishing.

Cyrus Dasadia always liked tinkering with open source projects since 1996. He has been working as a Linux system administrator and part-time programmer for over a decade. He works at InMobi, where he loves designing tools and platforms. His love for MongoDB started in 2013, when he was amazed by its ease of use and stability. Since then, almost all of his projects are written with MongoDB as the primary backend. Cyrus is also the creator of an open source alert management system called CitoEngine. He likes spending his spare time trying to reverse engineer software, playing computer games, or increasing his silliness quotient by watching reruns of Monty Python.

Learning Apache Kafka: Second Edition eBook


KEY FEATURES

Apache Kafka is simple to describe at a high level but has an immense amount of technical detail when you dig deeper. This step-by-step, practical guide will help you take advantage of the power of Kafka to handle hundreds of megabytes of messages per second from multiple clients.

  • Access 120 pages of content 24/7
  • Set up Kafka clusters
  • Understand basic blocks like producer, broker, & consumer blocks
  • Explore additional settings & configuration changes to achieve more complex goals
  • Learn how Kafka is designed internally & what configurations make it most effective
  • Discover how Kafka works w/ other tools like Hadoop, Storm, & more
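
As a sketch of the producer and consumer blocks listed above, here is a minimal example with the kafka-python client (our illustration, not from the ebook; assumes a broker on localhost and a hypothetical "events" topic):

    from kafka import KafkaProducer, KafkaConsumer

    # Produce one message to the "events" topic
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", b"page_view:/home")
    producer.flush()

    # Consume it back, stopping after 5 seconds of silence
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,
    )
    for msg in consumer:
        print(msg.value)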

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Nishant Garg has over 14 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum).

He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM.

Nishant has also undertaken many speaking engagements on big data technologies and is the author of HBase Essentials, Packt Publishing.

Apache Flume: Distributed Log Collection for Hadoop: Second Edition eBook


KEY FEATURES

Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It's used to stream logs from application servers to HDFS for ad hoc analysis. This ebook starts with an architectural overview of Flume and its logical components, and pulls everything together into a real-world, end-to-end use case encompassing simple and advanced features.

  • Access 178 pages of content 24/7
  • Explore channels, sinks, & sink processors
  • Learn about sources & channels
  • Construct a series of Flume agents to dynamically transport your stream data & logs from your systems into Hadoop
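
The agents described above are wired together in a plain properties file; a minimal sketch might look like this (our illustration, with hypothetical paths and host names: an exec source tails a log into a memory channel, and an HDFS sink drains it):

    # Name the components of agent a1
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Source: tail an application log
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/app.log
    a1.sources.r1.channels = c1

    # Channel: buffer events in memory
    a1.channels.c1.type = memory

    # Sink: write events into HDFS
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events
    a1.sinks.k1.channel = c1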

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Steve Hoffman has 32 years of experience in software development, ranging from embedded software development to the design and implementation of large-scale, service-oriented, object-oriented systems. For the last 5 years, he has focused on infrastructure as code, including automated Hadoop and HBase implementations and data ingestion using Apache Flume. Steve holds a BS in computer engineering from the University of Illinois at Urbana-Champaign and an MS in computer science from DePaul University. He is currently a senior principal engineer at Orbitz Worldwide (http://orbitz.com/).

          Learning MongoDB for $15   
A Comprehensive Guide to Using MongoDB for Fast, Fault Tolerant Management of Big Data
Expires May 18, 2022 23:59 PST
Buy now and get 80% off

KEY FEATURES

Businesses today have access to more data than ever before, and a key challenge is ensuring that data can be easily accessed and used efficiently. MongoDB makes it possible to store and process large sets of data in ways that drive up business value. Learning MongoDB will give you the flexibility of unstructured storage, combined with robust querying and post processing functionality, making you an asset to enterprise Big Data needs.

  • Access 64 lectures & 40 hours of content 24/7
  • Master data management, queries, post processing, & essential enterprise redundancy requirements
  • Explore advanced data analysis using both MapReduce & the MongoDB aggregation framework
  • Delve into SSL security & programmatic access using various languages
  • Learn about MongoDB's built-in redundancy & scale features, replica sets, & sharding

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming, mobile streaming
  • Certification of completion included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: all levels

Compatibility

  • Internet required

THE EXPERT

Daniel Watrous is a 15-year veteran of designing web-enabled software. His focus on data store technologies spans relational databases, caching systems, and contemporary NoSQL stores. For the last six years, he has designed and deployed enterprise-scale MongoDB solutions in semiconductor manufacturing and information technology companies. He holds a degree in electrical engineering from the University of Utah, focusing on semiconductor physics and optoelectronics. He also completed an MBA from the Northwest Nazarene University. In his current position as senior cloud architect with Hewlett Packard, he focuses on highly scalable cloud-native software systems.

          Introduction to Hadoop for $49   
Get Familiar with One of the Top Big Data Frameworks In the World
Expires January 02, 2022 23:59 PST
Buy now and get 28% off

KEY FEATURES

Hadoop is one of the most commonly used Big Data frameworks, supporting the processing of large data sets in a distributed computing environment. This tool is becoming more and more essential to big business as the world becomes more data-driven. In this introduction, you'll cover the individual components of Hadoop in detail and get a higher level picture of how they interact with one another. It's an excellent first step towards mastering Big Data processes.

  • Access 30 lectures & 5 hours of content 24/7
  • Install Hadoop in Standalone, Pseudo-Distributed, & Fully Distributed mode
  • Set up a Hadoop cluster using Linux VMs
  • Build a cloud Hadoop cluster on AWS w/ Cloudera Manager
  • Understand HDFS, MapReduce, & YARN & their interactions

PRODUCT SPECS

Details & Requirements

  • Length of time users can access this course: lifetime
  • Access options: web streaming
  • Certification of completion not included
  • Redemption deadline: redeem your code within 30 days of purchase
  • Experience level required: beginner
  • IDE like IntelliJ or Eclipse required (free to download)

Compatibility

  • Internet required

THE EXPERT

Loonycorn is comprised of four individuals--Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh--who have honed their tech expertise at Google and Flipkart. The team believes it has distilled the instruction of complicated tech concepts into funny, practical, engaging courses, and is excited to be sharing its content with eager students.

          LV loading BIG DATA LONGSLEEVE - WHITE - 550,00kr   
100% Cotton Very futuristic
          LV loading BIG DATA LIGHTER - 75,00kr   
Re-fillable lighter 100% Fire A thing of beauty Light it up
          LV loading BIG DATA TOTE - 150,00kr   
100% Cotton Off white drawstrings Fits tons of stuff inside of it Shaolin monk got your back
          LV loading 9 STICKER PACK - 75,00kr   
BIG DATA STICKER LIL INFO STICKER ¯\_(ツ)_/¯ STICKER One pack contains 3 of each sticker Put them everywhere
          LV loading 18 STICKER PACK - LIL INFO & BIG DATA - 150,00kr   
Sticker pack with 9 of each sticker Together they are more powerful
          Daily Deal: The Ultimate Data Infrastructure Architect Bundle   

Businesses have access to mountains of data. Big data and how to properly manage it is a big deal. The $36 Ultimate Data Infrastructure Architect Bundle is designed to teach you how to manage it all to make it more useful. The bundle includes 5 courses covering ElasticSearch 5.0, Apache Spark 2, MongoDB, Hadoop 2, and Amazon Web Services (AWS). You also receive 5 e-books offering expanded training on the concepts introduced in the courses. You’ll get an ElasticSearch 5.x Cookbook, Fast Data Processing with Spark 2, a MongoDB Cookbook, Learning Apache Kafka and Apache Flume: Distributed Log Collection for Hadoop.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.




          Americans still shopping in stores know they’re missing deals – and they don’t care   

Despite a looming retail apocalypse, nearly half of Americans of all ages still do most of their shopping in brick-and-mortars.

That's according to a poll from our partner, MSN, which also found that 17% of Americans shop mostly online, while 34% shop online and in stores equally.

MSN polls its readers, and then uses machine learning to model how a representative sample of the US would have responded, using big data, such as the Census. It's nearly as accurate as a traditional, scientific survey.

Interestingly, Americans aren't shopping in stores for low prices — only 3% say they get better deals there. Rather, they're trading savings for immediacy: 74% of people who prefer brick-and-mortars are doing it so they "can see and get items right away." Just 13% say getting help from a person is the biggest advantage of in-store shopping.

What's more, only 10% are heading to stores to avoid shipping costs. Even with premium membership services like Amazon Prime, which offers free two-day shipping and same-day delivery, it's clear that many Americans are prioritizing instant gratification — holding a product in hand and bringing it home moments later — over potential savings.

For those who prefer online shopping, 41% said it's more convenient and 24% said they find better deals.

But those shoppers who prefer an in-store experience may not be losing out on too much savings after all. While prices can be cheaper online, that's not always the case. According to an MIT study covered in the Boston Globe, online prices were lower than in-store prices 22% of the time. Ultimately, major US retailers "charge the same price for goods they sell online as compared with in stores about 70% of the time."

These findings come amid widespread retail closures in the US. Some of the nation's biggest department stores and shopping mall mainstays have shuttered hundreds of locations around the country to bolster online offerings, reports Business Insider's Hayley Peterson. Due to the rise of e-commerce, visits to shopping malls declined by 50% between 2010 and 2013, according to the real-estate research firm Cushman & Wakefield.



          Data Scientist / Data Science / Data Analyst / Big Data   
Data Scientist / Data Science / Data Analyst / Big Data. My Financial Client is looking to take on board a few Data Scientists for an upcoming project. The contract length is 6 months with possible extension. They are looking for ...
          Univa Powers Formula 1 Racing and more at ISC 2017   

In this video from ISC 2017, Bill Bryce from Univa describes how the company's software powers simulation for everything from Formula 1 racing to manufacturing and life sciences. "Univa is the leading innovator in workload management and containerization solutions. Univa's suite includes the world's most trusted workload optimization solution enabling organizations to manage and optimize distributed applications, containers, data center services, legacy applications, and Big Data frameworks in a single, dynamically shared set of resources."

The post Univa Powers Formula 1 Racing and more at ISC 2017 appeared first on insideHPC.


          HV Jagadish contributes to Big Data magazine article on diversity   
HV Jagadish, a core MIDAS faculty member and Professor of Electrical Engineering and Computer Science, contributed as a co-author on an article on diversity in big data that appears in...
          futurejournalismproject: Introduction to Bullshit Via Carl...   


futurejournalismproject:

Introduction to Bullshit

Via Carl Bergstrom and Jevin West, two professors at the University of Washington, whose course, “Calling Bullshit in the Age of Big Data,” launched this Spring:

The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

We’re sick of it. It’s time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.

Video lectures from the course are available on the class web site.

H/T: OpenCulture.


          Real World Data and Big Data, key tools in healthcare management   

The current combination of austerity policies and the continuous evolution of therapeutic innovations is creating a need for real, up-to-date information about the healthcare environment to support decision-making in healthcare management. Faced with this situation, healthcare decision-makers are demanding more sophisticated evidence and initiatives that demonstrate the […]

The post Real World Data and Big Data, key tools in healthcare management appeared first on Consejos de tu Farmacéutico.


          Xavier Mertens: SSTIC 2017 Wrap-Up Day #1   

I’m in Rennes, France to attend my very first edition of the SSTIC conference. SSTIC is an event organised in France, by and for French people. The acronym means “Symposium sur la sécurité des technologies de l’information et des communications“. The event has a good reputation for its content, but it is also known for how hard tickets are to get: usually, all of them are sold in a few minutes, released in 3 waves. I was lucky to get one this year. So, here is my wrap-up! This is already the fifteenth edition, with a new venue to host 600 security people. A live stream is also available, and a few hundred people are following the talks remotely.

The first presentation was given by Octave Klaba, the CEO of the operator OVH. OVH is a key player on the Internet with many services; it is known via the BGP AS16276. Octave started with a complete overview of the backbone that he built from scratch a few years ago. Today, it has a capacity of 11 Tbps and handles 2500 BGP sessions. It’s impressive how well this CEO knows his “baby”. The next part of the talk was a deep description of “VAC”, the solution they deployed to handle DDoS attacks. For information, OVH handles ~1200 attacks per day! They usually don’t communicate about them, except if some customers are affected (the case of Mirai was given as an example by Octave). They chose the name “VAC” for “Vacuum Cleaner“. The goal is to clean the traffic as soon as possible, before it enters the backbone. An interesting fact about anti-DDoS solutions: it is critical to detect attacks as soon as possible. Why? If your solution detects a DDoS within x seconds, attackers will simply launch attacks shorter than x seconds. Evil! The “VAC” can be seen as a big proxy and is based on multiple components that can filter specific types of protocols/attacks. Interesting: to better handle some DDoS attacks, the OVH teams reversed some gaming protocols to understand how they work. Octave described in deep detail how the solution has been implemented and is used today… for any customer! This is a free service! It was really crazy to get so many technical details from a… CEO! Respect!

The second talk was “L’administration en silo” (“Administration in silos”) by Aurélien Bordes and focused on some best practices for Windows services administration. Aurélien started with a fact: when you ask a company how its infrastructure is organised, they usually talk about users, data, computers, partners but… they don’t mention administrative accounts. From where, and how, are all the resources managed? Basically, there are three categories of assets. They can be classified based on colours or tiers.

  • Red: resources for admins
  • Yellow: core business
  • Green: computers

The most difficult layer to protect is… the yellow one. After some facts about the security of AD infrastructures, Aurélien explained how to harden the Kerberos protocol. The solution is based on FAST, a pre-authentication framework for Kerberos. Another interesting tool developed by Aurélien: the Terminal Server Security Auditor. An interesting presentation, but my conclusion is that it increases the complexity of Kerberos, which is already not easy to master.

During the previous talk, Aurélien presented a slide with potential privilege escalation issues in an Active Directory environment. One of them was the WSUS server. That was the topic of the research presented by Romain Coltel and Yves Le Provost. During a pentest engagement, they compromised a network “A” but also discovered a network “B” completely disconnected from “A”. Completely? Not really: there were WSUS servers communicating between them. After a quick recap of the WSUS server and its features, they explained how they compromised the second network “B” via the WSUS server. Such a server is based on three major components:

  • A Windows service to sync
  • A web service to talk to clients (configs & push packages)
  • A big database

This database is complex and contains all the data related to patches and systems. Attacking a WSUS server is not new. In 2015, a presentation at Black Hat demonstrated how to perform a man-in-the-middle attack against a WSUS server. But today, Romain and Yves used another approach. They wrote a tool to inject fake updates directly into the database. The important step is to use the stored procedures so as not to break the database’s integrity. Note that the tool has a “social engineering” approach: fake info about the malicious patch can also be injected to entice the admin to allow the installation of the fake patch on the target system(s). To be deployed, the “patch” must be a binary signed by Microsoft. Good news: plenty of signed tools can be used to perform malicious tasks. They used psexec for the demo:

psexec -> cmd.exe -> net user /add

Since the database is synced between different WSUS servers, it was possible to compromise network “B”. The tool they developed to inject data into the WSUS database is called WUSpendu. A good recommendation is to put WSUS servers in the “red” zone (see above) and to consider them as critical assets. Very interesting presentation!

After two presentations focusing on the Windows world, back to the UNIX world, and more precisely Linux, with the init system called systemd. Since it was adopted by major Linux distributions, systemd has been at the centre of huge debates between the pro-init and pro-systemd camps. I’m no different: I find it not easy to use, it introduces complexity, etc… But the presentation gave nice tips that can improve the security of daemons started via systemd. A first and basic tip is to not use the root account, but many newer features are really interesting:

  • seccomp-bpf can be used to disable access to certain syscalls (like chroot() or obsolete syscalls)
  • capabilities can be dropped (e.g. CAP_NET_BIND_SERVICE)
  • mount namespaces can hide paths (e.g. /etc/secrets is not visible to the service)

Nice quick tips that can be easily implemented!
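As a rough sketch of how those tips map to unit-file directives, here is a hypothetical hardened service definition (the daemon name and paths are invented, and exact directive support depends on your systemd version; see man systemd.exec):

    [Unit]
    Description=Example hardened daemon

    [Service]
    ExecStart=/usr/local/bin/mydaemon
    # Basic tip: run as an unprivileged account instead of root.
    User=mydaemon
    # seccomp-bpf: restrict the service to a safe syscall allow-list.
    SystemCallFilter=@system-service
    # Drop all capabilities (re-add e.g. CAP_NET_BIND_SERVICE only if needed).
    CapabilityBoundingSet=
    # Mount namespace: make /etc/secrets invisible to this service.
    InaccessiblePaths=/etc/secrets

    [Install]
    WantedBy=multi-user.target

Directives such as SystemCallFilter=, CapabilityBoundingSet= and InaccessiblePaths= are the unit-file counterparts of the three bullets above.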

The next talk was about Landlock, by Mickaël Salaün. The idea is to build a sandbox with unprivileged access rights and to run your application in this restricted space. The perfect example used by Mickaël is a multimedia player. This kind of application includes many parsers and is, therefore, a good target for attacks or bugs. The recommended solution is, as always, to write good (read: safe) code, and the sandbox must be seen as an extra security control. Mickaël explained how the sandbox works and how to implement it. The media player example restricted write access to the filesystem to pipes only.

After lunch, a set of talks was scheduled around the same topic: code analysis. It started with “Static Analysis and Run-time Assertion Checking” by Dillon Pariente and Julien Signoles. They presented Frama-C, a framework for C code analysis.

Then Philippe Biondi, Raphaël Rigo, Sarah Zennou and Xavier Mehrenberger presented BinCAT (“Binary Code Analysis Tool”). It can analyse binaries (x86 only) but never executes code. Just by checking the memory, the registers and much more, it can deduce a program’s behaviour. BinCAT is integrated into IDA. They performed a nice demo against a keygen tool. BinCAT is available here and can also be executed in a Docker container. The last talk in this set was “Désobfuscation binaire: Reconstruction de fonctions virtualisées” (“Binary deobfuscation: reconstructing virtualised functions”) by Jonathan Salwan, Marie-Laure Potet and Sébastien Bardin. The principle of this binary protection is to make a binary more difficult to analyse/decode without changing its original capabilities. This is not the same as a packer; here, some kind of virtualisation emulates proprietary bytecode. Those three presentations represented a huge amount of work but were too specific for me.

Then, Geoffroy Couprie and Pierre Chifflier presented “Writing parsers like it is 2017“. Writing parsers is hard. Just don’t try to write your own parser: you’ll probably fail. But parsers are present in many applications, and they are hard to maintain (old code, handwritten, hard to test & refactor). Parser bugs can have huge security impacts; just remember the Cloudbleed bug! The proposed solution is to replace classic parsers with something stronger. The criteria are: must be memory safe, can be called by / can call C code and, if possible, no garbage collection. Rust is such a language, and nom is a Rust library for writing parsers. To test it, it has been used in projects like the VLC player and the Suricata IDS. Suricata was a good candidate, with many challenges: safety and performance. The candidate protocol was TLS. As for VLC and parsers, the recent vulnerability affecting its subtitle parser is a perfect example of why parsers are critical.

The last talk of the day was about caradoc. Developed by ANSSI (the French agency), it’s a toolbox able to decode PDF files. The goal is not to extract and analyse potentially malicious streams from PDF files; like the previous talk, the main idea is to avoid parsing issues. After reviewing the basics of the PDF file format, Guillaume Endignoux and Olivier Levillain gave two demos. The first one opened the same PDF file in two readers (Acrobat and Foxit): the displayed content was not the same. This could be used in phishing campaigns or to defeat an analyst. The second demo was a malicious PDF file that crashed Foxit but not Adobe (a DoS). Nice tool.

The day ended with a “rump” session (called lightning talks at other conferences). I’m really happy with the content of the first day. Stay tuned for more details tomorrow! If you want to follow the live talks, the streaming is available here.

[The post SSTIC 2017 Wrap-Up Day #1 has been first published on /dev/random]


          What is Your Best Approach to Decision Making?   

Thanks to computer technology, software and apps, more and more companies rely on big data to drive their business models. Leaders develop strategies using information compiled and analyzed by computers. Despite all of these advances, there still needs to be a human element behind decision-making in corporations. Experts cited by Harvard Business Review detail the best approach to deciding how to move forward with a particular strategy.

Elements That Come Together

Three main elements come together when decision-making happens with all of these technology tools in the workplace. First, computers must compile the raw information. Sales figures, revenue, time to market, overhead, supply costs and labor expenses are all raw figures that computers work well with since those machines are great at doing math. Second, a program must analyze the numbers and organize them into usable information. This is when analytics software pays for itself.

The third aspect is that a human must employ decision-making to know what to do with the data. The program created a beautiful chart showing how price affects revenue over time, but what do company leaders do with that information? Business leaders must understand how the software compiles the information and what the numbers mean. People who can't figure out how to handle the data suffer from "big data, little brain," according to HBR.

How to Mitigate the Problem

Experts believe the best way to alleviate the problems that come from relying too heavily on data is for business leaders to go with their gut and experience every once in a while. Finding this balance doesn't mean eschewing information altogether; it's more about knowing which data to pay attention to when making decisions.

Ironically, there is a way to figure this out using a different kind of computer program. An agent-based simulation, or ABS, examines the behavior of humans, crunches the numbers and then makes a predictive model of what may happen. The U.S. Navy developed ABS in the 1970s when it examined the everyday behavior of 30,000 sailors. This type of program is gaining more widespread use due to better computing power.

There is a ton of information that computers must take into account to make these predictive models. When ABS first started, simulations ran for a year before getting results. In 2017, that timeframe is down to one day by comparison.

ABS uses information from the analytics software and applies it to customer behavior models and decision-making to predict what may happen in the future. This type of program answers questions such as what happens when a company changes the prices of its products, how a competitor might adapt to market forces, and what customers may do when you change a product.

ABS can't predict everything, but it does take into account human expertise. ABS, like any analytics software, is only as good as the data it collects. It makes decisions more transparent because it supports the notion that if a company moves in one direction, then a certain scenario is likely to happen. You must remember to take a risk no matter what path you're on.
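As a toy illustration of the idea (not any vendor's ABS product), here is a minimal agent-based simulation in Python. Each simulated customer gets an invented willingness to pay, and the model compares revenue under two candidate prices before either is tried in the real market; all names and numbers below are hypothetical.

    import random

    class Customer:
        """One agent with its own (invented) willingness to pay."""
        def __init__(self, rng):
            self.willingness_to_pay = rng.uniform(5.0, 15.0)

        def buys_at(self, price):
            return price <= self.willingness_to_pay

    def simulated_revenue(price, n_customers=10_000, seed=42):
        # Same seed -> same simulated population, so scenarios are comparable.
        rng = random.Random(seed)
        customers = [Customer(rng) for _ in range(n_customers)]
        buyers = sum(c.buys_at(price) for c in customers)
        return buyers * price

    # Ask "what happens if we change the price?" in simulation first.
    for price in (8.0, 11.0):
        print(f"price ${price:.2f} -> simulated revenue ${simulated_revenue(price):,.0f}")

A real ABS layers in empirical behavior models and competitor agents, but the pattern is the same: encode agent rules, run the population, and compare scenarios before committing in the real world.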

Decision-making shouldn't be all data or all going with your guts. However, data gathering certainly makes it easier thanks to the technological tools available to businesses.




          Hear It from a Girl Scout: I Would Rather Be the Researcher Than the Enthusiast   
Since 2013, in collaboration with Arconic Foundation, Girl Scouts of the USA has awarded ten Girl Scouts the Chuck McLane Scholarship, which is available to Gold Award recipients who complete projects related to science, technology, engineering, or math (STEM). Earlier this month, we announced that Ashley Martin of Girl Scouts of North-Central Alabama was a recipient of the Chuck McLane Scholarship for 2017. She will attend the University of Georgia as a Bernard Ramsey Scholar majoring in genetics. Check out her story and what she has to say about her experience at Girl Scouts.

Tell us about your STEM-related Gold Award project and your experience at Girl Scouts.

My project is a curriculum that gives a mostly unbiased overview of genetic engineering and genetically modified foods and encourages people to develop their own opinions. The curriculum is a three-day unit study, including materials to cover the scientific, economic, social, and environmental impacts of GMO crops. It also provides an overview of basic genetics to facilitate understanding of more advanced material. The target audience is a high school–level biology class, advanced middle school students, and homeschoolers. Before making the materials available freely on the Internet, I conducted a small pilot class of 14 people and administered pre- and post-course quizzes to measure the program’s impact. The pilot class went well, with a 43 percent increase between an initial background quiz and the final quiz. This curriculum is available at www.BetterLesson.com and www.TeachersPayTeachers.com.

For me, this was a perfect capstone to an incredible experience in Girl Scouts. I started ten years ago, joining a Brownie troop in Pittsburgh before moving to my new home in Huntsville, Alabama. Girl Scouts has given me opportunities to volunteer and try things I would never be able to do on my own, like spending the night in a science museum. My favorite part was making signs and selling cookies every year. As I got older, I started taking on more of a leadership and mentor role, helping my little sister’s troop whenever I could. Being a Girl Scout taught me a lot about the type of person I want to be.


What advice would you give to other girls who are in the process of earning their Gold Award?

Don’t give up! I will confess that at various times my Gold Award project felt overwhelming or impossible. But rather than focus on the whole thing, I learned to break it up into smaller pieces, and then lay out a plan for completing each piece. After that, it was much easier to just focus on and follow the plan. I didn’t need to have a solution for everything at the same time; I just had to tackle the next step in front of me.


How do you take the lead?

The Girl Scouts, my Journeys, and the Gold Award have been great opportunities to take on a leadership role in the more traditional sense. While those were amazing experiences that taught me a lot about myself and the type of person I want to be, I also think leadership doesn’t just happen in those big moments. I also work hard to lead by example in my daily life, often in simple ways. I treat others with respect and compassion. I try to be a good listener, and a good friend, for everyone I know. When someone needs help, I’m the first to jump in and the last to leave. I think this type of leadership on a small scale is just as important as the more traditional examples.


Was there a particular event or moment in your childhood that sparked your interest in STEM?

Ever since I read about an experiment where scientists created cats that glow in the dark, I have been fascinated by biology research. I loved to learn about advancements in genetic engineering and how it has the potential to solve problems affecting both food and medicine. In ninth grade, I realized that I would rather be the researcher making these advancements than the enthusiast reading about them. Ever since then, it has been my goal to be a scientist.


What does the Chuck McLane Scholarship mean to you?

I was incredibly honored to learn I had won the scholarship. Before I applied, I looked over the online descriptions of the previous winners and was blown away by their passion for STEM and the scope of their Gold Award projects. To be considered an equal to these amazing and talented women is humbling. I only hope that I can inspire someone the way that they inspired me!


Where are you going to college, and what STEM studies are you interested in focusing on?

I will be attending the University of Georgia in the fall, majoring in genetics. I have already completed a lot of the freshman biology and chemistry courses through dual enrollment in high school, so I’m excited to jump right in to higher level biology and genetics classes. Ideally, I also want to start conducting research as a freshman, so that I can augment classroom learning with practical experience in a working laboratory setting. I hope to minor in computer science or bioinformatics, as genetics serves as an incredibly complex “big data” problem that will require novel computational analysis methodologies. This will give me the most solid foundation possible for future education and careers.

In the long term, I intend to earn my PhD in genetics and become a researcher in academia or private industry. I would love to work at a cutting-edge biotechnology start-up that uses genetic engineering to improve medicine, agriculture, or pharmaceuticals.


Do you have any female heroes in STEM?

I have always admired the work and life of Marie Curie. She did groundbreaking work in chemistry and physics related to radioactivity during an age when higher education for women was very rare. That work led to a Nobel Prize (twice!) and the discovery of two atomic elements. She then took that science and developed mobile X-ray units used to treat soldiers on the front in World War I. That, to me, is the ultimate goal for any scientist: to not only advance human understanding, but also have a positive impact on society.


What advice would you give to other girls who want to pursue STEM careers?


Just do it! I’m lucky to have had a lot of supportive people in my life who believed in me and taught me I could do anything I put my mind to. But even if you don’t have people in your corner, believe in yourself. I always struggled with math, but made it through college-level Calculus I and II in high school with a lot of blood, sweat, and tears. It was hard work, but incredibly rewarding once I realized I really could do it. Now I know the sky is the limit.


From 2013 through 2017, the Arconic Chuck McLane Scholarship Program has provided a $10,000 scholarship to two girls a year. Learn more about these young women and the other Arconic Chuck McLane Scholarship recipients.

          Festival Elektra: when science fiction becomes reality   
The Elektra festival has been enlivening the city since yesterday with its exhibitions and digital art installations. The participatory show Inferno, presented for the second year, is the highlight of this 18th edition, whose theme is The Big Data Spectacle. An immersion in this extraordinary project at Usine C.
          FTSearch too slow   

Hello, I want to ask if somebody could help me out with this problem.

We have quite a big database, ca. 10 GB (most of it attachments). The FT index is 2 GB, quite big as well.

Is it normal that an FTSearch query needs ca. 1-2 minutes to find the result?

Sometimes it takes 30s, or 9s, but a 5-minute search time is far too long.

I'm using an agent and view.FTSearch (on 38,000 documents).

Is that time OK? How can I improve the search speed?

Thanks


          India-Based Infosys Plans to Hire Thousands of U.S. Workers for New U.S. Locations   
Amid criticism of outsourcing firms, at least one large Indian outsourcing company is planning to hire 10,000 U.S. workers over the next two years. Infosys CEO Vishal Sikka announced the company will open four technology and innovation hubs in the U.S. “focusing on cutting-edge technology areas, including artificial intelligence, machine learning, user experience, emerging digital technologies, cloud, and big data.” The first will be in Vice President Mike Pence’s home state of Indiana. The Indiana campus also will have a training facility and a “skilling and re-skilling” facility. Scheduled to open in August, the Indiana campus alone is expected to employ at least 2,000 U.S. workers by 2021. Indiana reportedly offered Infosys incentives that include $500,000 in training funds and $15,250 in conditional tax credits per job. Sikka has stated that there is “a strong desire by [President Donald Trump] and [the] administration to hire more…
          6/30/2017: ECONOMY: An investment in our future   

Using big data to save big money — and hopefully, do some people some good too — is the aim of the Government’s social investment strategy. The fiscal dimension is too easily dismissed. Welfare spending is one of the four big-ticket items in the...
          Could Your Product Be Too Early? How To Position It the Right Way   


“Data is the new oil!”

If you are in big data like I am, I'm sure you have heard this one before. As the Chief Sales Officer of Cinema Intelligence, a big data company for movie exhibitors, I have had the good fortune of bringing an innovative new piece of software to market.

Now, here’s the dilemma: In the last few years, while there have been endless publications on how movie studios are extracting data from platforms such as Twitter, Facebook and Google, there have been very few publications on how movie theaters can make data-driven decisions to boost box office numbers.

While many small businesses and startups might see this as a red flag and blame a lack of sales success on poor "product market fit," we've found ways to succeed in the industry by positioning our product the right way. Below are a couple of pointers that may help you dramatically with your company's growth:

Understand The Big Trends

You now have a groundbreaking product. However, before you jump on that first phone call or email your nicely tuned sales pitch, the first thing you must study is the general industry trend.

For example, in the U.S., the relationship between studios and theaters deeply impacts a theater's capacity to bring in new technology. This starts with the unusual power balance between studios and theaters. Unlike theaters in other countries, theaters in the U.S. do not offer as much diversity when it comes to content, because of the limited number of studios they work with.

Furthermore, relative to foreign markets, the U.S. theater market is much more competitive, with dozens of mid-sized theater chains across the country. The six or seven major studios produce the bulk of the films, and there are tons of theaters to present them, giving studios the edge when negotiating product. This contrasts starkly with a market such as India, where there is three to five times as much content in any given year.

In such markets, theaters' greater negotiating leverage gives them far more flexibility to try new business ideas to push their products, which in turn makes them more willing to try new software. Understanding this big picture helped us create a blueprint for success. When Cinema Intelligence first launched in 2011, our focus was to expand in markets that were much more receptive to new ideas and flexible about change. We were country-agnostic and expanded to markets in Africa, Asia and Europe. Additionally, it didn't hurt that many film industries outside the U.S. have a relatively short history, and their theaters are founded by young entrepreneurs interested in leveraging the latest technology. Launching in 2011 was important for us, since we needed to find early adopters and clients who championed our product. It wasn't until this year that we finally decided to tackle the U.S. market. For your product to achieve success, it is important to understand the big picture and find the markets that are willing to become early adopters.

Understand Your Customer’s Needs

Oftentimes, when you present a new technology, customers may not know the value of your product. Furthermore, today's products and services are far more complex than yesterday's, and selling a product such as big data will require a commitment on your end to educate the customer. If a customer fails to understand your value proposition, that is on you and your sales team.

As you engage with your customer, you should be able to answer the following questions:

  • What is the decision-making process like?
  • Is the decision-making power centralized or decentralized?
  • How great is their aversion to change?
  • Who are the stakeholders that will be using the product?
  • What does each department need?

One exercise that helped us during the sales process was setting up a discovery call. The purpose of the discovery call is to uncover the exact business processes behind the decisions a theater makes throughout the entire lifetime of a movie. Once we understand their business processes from A to Z, we explain why the changes to their business add value. By pitching a solutions-driven presentation that explains the "why" instead of a product-driven presentation that explains the "what," we are able to educate our customer. Additionally, we often do separate demos and presentations for each department. This is important because what excites the marketing team can be drastically different from what excites the finance team. It's all about dividing and conquering.

They say that you need to know your client’s business better than they know it themselves. While this is much easier said than done and can be quite challenging, you should always strive to do so. Lastly, if you’re a new business entering an industry with a new idea and haven’t quite proven your idea yet, feedback is essential and customization of your product will probably be expected. While you may see this as a cost, it is important to understand this as an opportunity. Customers will see it as a commitment to meet the expectations of your product performance. Therefore, invest in the relationship. If you’re able to establish yourself as a market leader in the industry after you capitalize on the first-mover advantage, your payoff can be immense.


          Big Data Developer - Verizon - Burlington, MA   
Beyond powering America’s fastest and most reliable network, we’re leading the way in broadband, cloud and security solutions, Internet of Things and innovating...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Xcalar, which wants to use big data to provide business insights, announces $16M Series A led by Khosla Ventures; firm has partnerships with Microsoft, Google (Cat Zakrzewski/Wall Street Journal)   

Cat Zakrzewski / Wall Street Journal:
Xcalar, which wants to use big data to provide business insights, announces $16M Series A led by Khosla Ventures; firm has partnerships with Microsoft, Google  —  The round will fuel the company as it takes on Palantir in big data for business analytics.


          Robert Bosch GmbH: Systems Developer (m/f) Big Data   
Definition of requirements for the functional data acquisition of braking systems
          Vpon Releases the Latest Hong Kong Mobile Advertising Data Report   
HONG KONG, CHINA--(Marketwired - Jun 29, 2017) - Vpon Big Data Group releases the Hong Kong Mobile Advertising Statistics and Trends report, revealing the latest landscape of Hong Kong mobile advertising. The report indicates that an upward trend in ...
          Big Data and Hadoop Analytics Certification Bundle (92% discount)   
The business world runs on data these days, and it’s the people who know how to make data-driven business decisions who are getting ahead. You can become one of those people after this 3-course bundle diving into the science and best business practices of working with mass amounts of data and real-time analytics. You’ll explore…
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
• Experience designing and building complex, high performance platforms to support Big Data ingestion, curation and analytics. • Thorough understanding of the
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
The individual will work closely with developers, architects, and end users to implement capabilities based on product roadmap. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Quart de Poblet improves its industrial park's services by sectorizing the water pipe network   
The Quart Town Council and Global Omnium/Aguas de Valencia have completed the sectorization of the industrial park's water pipes, which run for more than 35 kilometres. This will improve the management of the park's water resources and support its industrial and commercial development.

Over the 11 months of work, technicians installed 1.2 kilometres of pipes, 12 meters to measure flows, and 52 valves that divide the park's hydraulic network into 10 smaller sectors, in order to monitor and optimize the volume of water in each of them. To put this in context, the water distributed annually to all the companies located there totals 1,204 million litres, equivalent to the volume stored in 482 Olympic swimming pools and to the municipality's total consumption over 172 days of the year.

Sectorizing the distribution network is the technique the Town Council and Global Omnium/Aguas de Valencia have used to monitor water consumption in Quart's industrial park. The aim is to divide the network into distinct areas or sectors, each fitted with one or more meters and data loggers that measure its consumption, so the resulting data can then be managed. Defining and delimiting the predefined sectors required isolating the pipes that connect adjacent sectors by closing the sectorization valves installed on them.

In recent years, the collaboration between the Quart de Poblet Town Council and the service concessionaire, Global Omnium/Aguas de Valencia, has achieved important milestones that make the municipality a reference in efficient, comprehensive water management. One figure that backs this up: almost 105,000 m3 of water and 205,698.70 kWh of energy were saved over the last year, thanks to nearly 3,000 technical interventions carried out across the municipality to guarantee efficient water management. These savings are equivalent to supplying the town for 15 days, which demonstrates the council's commitment to sustainable development.

The deployment of cutting-edge technology, such as the remote reading of smart water meters, is another major project that helps position Quart de Poblet's management internationally. Currently, 95% of the meters in operation, some 10,500 units, use this innovative technology to increase the efficiency of the town's water and energy management. Thanks to big data and information management, it makes it possible to detect hidden leaks earlier, reduce repair times, minimize fraud and more. In short, it is a pioneering innovation in Spain and the rest of the continent that shows the council's commitment to improving service quality, while also putting the data at citizens' disposal so they can consume responsibly.
Information management applied to understanding water resources allows the Town Council and the company to improve all their processes and metrics for the benefit of the quality of life of Quart's citizens, combining environmental and social values.
          How to Create and Implement a Smart City Strategy   

A smart city strategy and its implementation evolve with the technological advances achieved across a rapidly developing global spectrum of research and development in Machine-to-Machine (M2M) communication, the Internet of Things (IoT) and, beyond it, the Internet of Everything (IoE).

The many online resources available give a clear indication of the direction a technology company needs to take if it wants to be in the league of the highly lucrative IoE domain which, considering economies of scale, is a prime index for determining return on investment. At the end of the day, the initiatives undertaken by pioneers in the IoE spectrum are meant to ensure that they benefit commercially at a global scale, gaining a first mover's advantage. As with many internationally acclaimed corporations, IoE is the next phase they foresee for rapid growth and diversification of business with a global footprint.

Companies must embrace the latest and evolving technologies in order to remain competitive in the coming years. Enhanced hybrid internet connectivity, redundant and highly available big data solutions, robust platforms running on cloud computing, and integration with wide-reaching social media can be identified as the major forces that ensure competitive capability.

According to statistics published by Gartner, the total number of IoT devices worldwide is estimated to reach 26 billion by 2020, almost a 30-fold increase on 2009. This surge in smart devices points directly to the need for highly available, reliable infrastructure and bandwidth to handle IoT growth.

The key stakeholder in smart city strategy formulation and implementation is the end user, who is the focus when designing smart city platforms: the goal is appealing user interfaces that present complex data structures, analyzed with intelligent analytics tools, on dashboards that are easy to use and interpret. Collating all the sub-systems in a city-level smart city ecosystem is an evolution of large-scale Information and Communication Technology (ICT) solutions, catering to needs and functions that demand greater efficiency and sustainable resource utilization while remaining readily usable by every resident of the city.
The happiness index is a key measure and development target for policy makers taking strategic decisions on sustainable city-level development initiatives, as highlighted in the World Happiness Report 2013.

On 5 March 2014, Dubai formulated a strong, far-sighted strategy with tangible milestones to transform itself into a "Smart City". Considering that more than 180 nationalities live in Dubai, adopting the latest technologies directly and substantially improves the living standards of the people residing there. The smart city initiative and its related developments, particularly in view of Expo 2020, are vital to future developments in the region. The introduction of ministries for happiness, tolerance and the future is a clear, indicative direction to ensure that Dubai's milestones and vision are met. The government set a three-year timeline to keep the Dubai Smart City project on track, with the aim of making Dubai the world's smartest city by 2017.

Pacific Controls has pioneered an artificial intelligence framework developed for virtualising managed services and for delivering real-time business intelligence with "Galaxy 2021", its Internet of Things platform. This groundbreaking technology from Pacific Controls offers the world the opportunity to leverage the ubiquitous Galaxy 2021 platform for IoT infrastructure and smart city management applications. Pacific Controls' Galaxy 2021 is the world's first enterprise platform delivering city-centric services for managing an ecosystem comprising Agriculture, Airports and Aviation, Education, Healthcare, Government, Energy, Financial Services, Hospitality, Manufacturing, Ground Transport, Logistics, Marine Ports, Oil and Gas, and Residential.


For more details, visit: http://pacificcontrols.net/





          We are looking for a Digital Transformation Consultant. | Responsibilities: What awaits you...   
We are looking for a Digital Transformation Consultant. | Responsibilities: What awaits you: supporting our clients through the digital transformation with a unique combination of knowledge at the interface of business and technology; advising our clients on the use of modern information and communication technologies; developing strategy options and recommendations for action, and supporting our clients in implementing IT strategies, digital business models and enterprise architectures at every management level. Consulting focus areas may include: Big Data Analytics strategy development and implementation, data science and data visualization from proof of concept to production; building and expanding Internet of Things platforms, robotics and the digitalization of processes, data center management and network infrastructure; IT security, IT governance, IT service management, IT process management, IT architecture management, and IT project, program and portfolio management. | What we offer: work in highly motivated, dynamic and interdisciplinary teams with flat hierarchies; a flexible, modern and appreciative working environment; excellent training and development opportunities. | Requirements: a master's, Diplom or doctoral degree in business informatics, industrial engineering, economics with an IT focus, or natural sciences with business-relevant knowledge; strong interest in digitalization and technology trends such as IoT, big data, cloud management, EAM, virtualization and mobility; very strong analytical and conceptual skills; very good communication and networking skills; high willingness to travel for projects, nationally or internationally depending on preference; fluent German and English. | More info and application here: www.profession.hu/allas/1033186
          We are looking for a Data Scientist. | Responsibilities: Interact with customers to underst...   
We are looking for a Data Scientist. | Responsibilities: Interact with customers to understand their requirements and identify emerging opportunities. • Take part in high- and detailed-level solution design to propose solutions, translating them into functional and technical specifications. • Convert large volumes of structured and unstructured data into actionable insights and business value using advanced analytical solutions. • Work independently and provide guidance to less experienced colleagues/employees. • Participate in projects, working and collaborating effectively with onsite and offsite teams at different worldwide locations in Hungary/China/US while delivering and implementing solutions. • Continuously follow data science trends and related technology evolutions in order to develop the knowledge base within the team. | What we offer: Membership of a dynamically growing site and an enthusiastic team. • Professional challenges and opportunities to work with prestigious multinational companies. • Competitive salary and further career opportunities. | Requirements: Bachelor's/Master's Degree in Computer Science, Math, Applied Statistics or a related field. • At least 3 years of experience in modeling, segmentation and statistical analysis. • Demonstrated experience in Data Mining and Machine Learning; Deep Learning (TensorFlow) or Natural Language Processing is an advantage. • Strong programming skills in Python, R and SQL, and experience with algorithms. • Experience working on big data and related tools (Hadoop, Spark). • Openness to improving skills and competencies and learning new techniques and methodologies. • Strong analytical and problem-solving skills to identify and resolve issues proactively. • Ability to work and cooperate with onsite and offsite teams located in different countries (Hungary, China, US) and time zones. • Strong verbal and written English communication skills. • Ability to handle strict deadlines and multiple tasks. | More info and application here: www.profession.hu/allas/1033284
          30062017 supplier meeting big data   

Supplier meeting
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Install and maintain all relevant OS components and utilities. Install and maintain MongoDB components and utilities within the platform. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Red Hat Solutions Architect - MOBIA Technology Innovations - Canada   
Telecom Infrastructure, Converged Infrastructure, Cloud, Security and Big Data. Red Hat Solutions Architect....
From MOBIA Technology Innovations - Mon, 26 Jun 2017 13:15:01 GMT - View all Canada jobs
          QA Lead   
CA-Irvine, QA Lead Irvine, CA Full time permanent SEW www.smartenergywater.com Summary SEW is the # 1 Energy and Water Cloud Platform, providing cloud-based Software-as-a-Service (SaaS) solutions for Customer Engagement, Workforce Mobility, and Big Data Intelligence and Analytics to the Energy and Utility sector. We believe that Utility business model will continue to evolve with the focus on Customer Engage
          Plant Technician   
WA-Ritzville, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Intern/Working Student (f/m) Data Analyst - PwC - Hannover   
You would like to work with Business Intelligence and/or Big Data technologies and learn more about the interplay between IT and business...
Found at PwC - Sun, 26 Mar 2017 04:17:11 GMT - View all Hannover jobs
          Linux Systems Administrator - Between Technology - Barcelona   
At BETWEEN we select and back the best talent in the technology sector. We get involved in a wide variety of cutting-edge projects, working with the latest technologies. BETWEEN currently has a team of more than 350 people. In the Development area we take on web and mobile projects and work in fields such as BI, IoT, Big Data and R&D. In the Operations area we implement Service Desk, IT Infrastructure and Cloud projects, among others. ...
          Saving American Football by JoJoFootball   
I need some help with my business. (Budget: min $50000 USD, Jobs: Analytics, Big Data, Business Plans, Gamification, Microsoft Hololens)
          Precision Medicine and Biomarkers Leaders Summit   

The 4th Global Precision Medicine & Biomarkers Leaders Summit examines groundbreaking biomarker, companion diagnostic, immuno-oncology, genomics & big data, and AI research to facilitate the development of impactful personalized treatments for patients. This expanding international event attracts the leading authorities worldwide working in companion diagnostics, big data, genomics, biomarkers, immuno-oncology and other facets of precision medicine. … Continue reading Precision Medicine and Biomarkers Leaders Summit

The post Precision Medicine and Biomarkers Leaders Summit appeared first on Almac.


          Obfuscation   

Obfuscation means the deliberate addition of ambiguous, confusing and misleading information designed to interfere with the constant collection of personal data by authorities, companies, advertisers, hackers and others. These are scenarios that, while specific to the online universe, reach well beyond the digital sphere.

What pays the price are privacy and dissent, freedom of expression and freedom of movement. Precisely in the era of pervasive (cyber)surveillance and the indiscriminate rise of Big Data, each of us must be able to turn to tools of self-defence to correct these "information asymmetries" and at least reduce the power of external control.

Without proposing useless barricades, and drawing on various examples from the field (from the Second World War to the struggles against South African apartheid, from Twitter bots to camouflage software), this self-defence manual summarizes the many applications of today's obfuscation strategies, clarifying their theoretical and philosophical foundations and their potential concrete results against a given adversary. Above all, it asserts the social importance of obfuscation as a further tool for protecting privacy, freedom and dissent in the digital age.


          Big Data    

Find out what big data is
and how it affects your life (for better and for worse)

How can the spread of an epidemic be observed in real time? How can crime be prevented and cities made safer? Is it possible to know the emotions and moods of an entire nation? Can our passions become a dangerous threat to our privacy?

Big data is the answer to all these questions: by making it possible to act on the totality of information rather than only on statistical samples, it allows faster, cheaper and extraordinarily more precise answers about the world around us.

As the recurring "Datagate" scandals show, however, companies and institutions are exploiting these technological innovations to store, often without our knowledge, endless quantities of details about our lives. Simply logging into a social network, using a smartphone or just browsing the web is enough to lose control over intimate, private information.

But if reality can be turned into data, even our interests and feelings through Facebook "likes" and Twitter updates, then our preferences and tastes will also end up being bought and sold, scrutinized and sanctioned.

And if this information were badly managed, the risk of a reduction of our rights, or even of a dictatorship of probabilities as in the famous "Minority Report", would be very high.

"In a decade there are very few books capable of changing the way we see things.
This is one of them."
Lawrence Lessig, author of The Future of Ideas

"Big Data is an illuminating and extraordinarily timely book."
The New York Times

"No other book offers such a balanced and accessible look
at the many benefits and dangers of our growing passion for data."

The Wall Street Journal

"Big Data will remain the definitive reference text on the subject for many years to come."
Forbes

Contents

1. Now
Letting the data speak
More numerous, messier, good enough

2. More
From some to all

3. Messy
More beats better
Messiness in action

4. Correlation
Predictions and predilections
Illusions and illuminations
Man versus manholes
The end of theory?

5. Datafication
Quantifying the world
When words become data
When location becomes data
When interactions become data
The datafication of everything

6. Value
The "option value" of data
The reuse of data
Recombinant data
Extensible data
Depreciating the value of data
The value of data exhaust
The value of open data
Valuing the priceless

7. Implications
The big-data value chain
The new data intermediaries
The end of the expert
A question of utility

8. Risks
Paralyzing privacy
Probability and punishment
The dictatorship of data
The dark side of big data

9. Control
From privacy to accountability
People versus predictions
Deciphering the black box
The rise of the algorithmist
External algorithmists
Internal algorithmists
Keeping the information barons in check

10. The future
When data speaks
Even bigger big data


          Learning, unlearning and the art of innovating   

La innovación, en tanto que proceso activo y motor de progreso, se constituye en una constante en los diferentes sectores económicos, políticos y sociales. En una sociedad centrada en el conocimiento, el mecanismo para mantener la competitividad es la innovación. Y en el caso de la educación, hoy más que nunca, se requiere de su aplicación, porque los medios electrónicos llevan el conocimiento de manera irrestricta a todos los públicos; las nuevas generaciones entienden el mundo en ciento cuarenta caracteres; y los esfuerzos de los profesores para alcanzar los objetivos de aprendizaje requieren de maneras diferentes para relacionarse con el estudiante y con la tecnología.

De acuerdo con el Instituto de Inovação: «La innovación es la exploración con éxito de nuevas ideas» (Instituto de Inovação, 2011), lo que implica la comprensión del éxito de acuerdo con los objetivos de la organización que innova. Para las organizaciones educativas, este representa el logro de los objetivos misionales, expresados en el perfil de egreso del estudiante, por lo que se circunscribe a la denominada innovación de procesos, que es definida en el Manual de Oslo sobre innovación como «un nuevo o significativamente mejorado proceso» (OCDE y Eurostat, 2006: 59), que en el escenario de la educación incorpora tres actores principales: el profesor, el estudiante y la tecnología que se involucra en la formación.

En esta tríada se genera la innovación educativa, que implica y desarrolla las metodologías que el profesor trae al aula de clase para establecer una relación renovada con el estudiante y de este con el conocimiento; el aprovechamiento que el profesor logra sobre las tecnologías ofrecidas por la institución o sobre las identificadas por él mismo para generar ambientes de aprendizaje; y el diseño de ambientes de aprendizaje en los que el estudiante se relacione con la tecnología para alcanzar los objetivos de formación esperados. Como resultado de esta interacción, hoy en día se avanza en el desarrollo de los MOOC (Massive Open Online Course), webinars, flipped classroom, learning analytics, ecosistemas de aprendizaje (digitales y mixtos), uso de redes sociales, blogs, wikis, OCW (OpenCourseWare), entre otros. Esto sumado a la experiencia individual de muchos profesores que han dinamizado el proceso de enseñanza y aprendizaje dentro del aula, lo que los ha llevado a diseñar sus propias metodologías que permiten generar engagement (compromiso) en el estudiante.

Tipologías de la innovación

El más visible de los procesos de innovación educativa se ha dado con el apoyo de las instituciones, utilizando metodologías de innovación top-down con las que se han desarrollado —o solicitado incorporar a su cuerpo de profesores— plataformas tecnológicas para aproximar el conocimiento a los estudiantes o como parte del desarrollo de las actividades de clase. Y no solo se da en procesos de formación a distancia, con el posterior desarrollo virtual de las plataformas e-learning y la evolución hacia las metodologías blended, sino con los medios que se han utilizado para dotar las aulas de instrucción para el correcto desempeño de las metodologías presenciales, herramientas que permiten usar bases de datos, vídeos, simuladores, pizarras inteligentes, entre otros. Casos de apoyo institucional reconocidos son la disposición en internet de los materiales de clase que universidades como el MIT (Instituto Tecnológico de Massachusetts) han incorporado y ofrecido, en ocasiones, abiertamente desde el año 2000, tendencia que hoy ha sido implementada por muchas otras instituciones educativas; o la avalancha de cursos que prestigiosas universidades han diseñado y ofertado por medio de los MOOC, con la posibilidad de certificar el aprendizaje obtenido, después del primer gran éxito de 2012 del curso de Inteligencia Artificial ofrecido por la Universidad de Stanford; o la más reciente metodología de flipped classroom desarrollada por dos profesores norteamericanos de preparatoria y que hoy en día se fortalece en muchas universidades del mundo. Estos procesos, de alta visibilidad e impacto en el medio educativo, son innovaciones jalonadas de manera institucional.

A second type of educational innovation takes place directly in the classroom. If institutions documented their teachers' good practices better, they would find that these surpass the ones mentioned above, both in the number of initiatives and in the learning students achieve. This work by enthusiastic innovators has not been sufficiently valued, nor has its real impact been measured, whether on the achievement of objectives or on the mark it leaves on students. In many cases these are innovations on traditional methodologies that have allowed their reinvention, from the lecture, workshops and teamwork to the classic learning-centered methods of case-based and cooperative learning, and even experiential learning, PBL (problem-based learning) and POL (project-oriented learning), which took hold in education over the course of the twentieth century. This happens because teachers' individual efforts to get students more committed to their own learning do not always run through technology, nor depend on it to any great degree; rather, they seek a greater impact on the development of the graduate profile and a more effective fulfillment of the institutional mission. This type of educational innovation holds, in essence, enormous potential.

In that sense, we teachers should not simply wait for institutional decisions about which technology to adopt as the vehicle of educational innovation; we should keep contributing from our own classroom experience, generating R&D&i projects in which we systematize shared experience and measure both the learning achieved and the impact on the graduate profile. These projects should be shared among colleagues, so that they can be validated by teaching collectives that can also test (use) them, building them up and deconstructing them, and then disseminated as a contribution from action- and practice-based methodologies to the renewal of teaching and learning. Institutional support is clearly necessary, but turning ideas into innovation projects starts with the teacher, in bottom-up innovation processes.

Innovating in the ocean of information

Within projects of this kind, the renewal of teaching-centered methodologies, reorienting them toward learning, and the reinvention of learning-centered methodologies are the two problem areas where the teacher can contribute most. How can expository methodologies give students greater responsibility for their own learning? What is the role of the whiteboard in active-learning methodologies? How can experiential learning be enriched without adding technology? These questions, posed simply as an invitation to reflect, can energize our innovative practice in the classroom and allow us to rethink our didactics.

Another factor to consider is the volume of information that technology allows us to handle, which in itself poses a challenge that educational innovation, through teachers interested in it, must address. Steps have already been taken through learning analytics, and through the training of symbolic analysts, a role called for by a former U.S. Secretary of Labor (Reich, 1992), who proposed an education that would let students develop their capacity to identify, broker and solve problems through the use of symbols, in the sense of finding the data, facts, words and representations most meaningful for identifying and solving the problem at hand.

Educational innovation that keeps the student from getting lost in the ocean of information that technology brings with it, and that leads them to concentrate on what is relevant, is a second field of innovation to be strengthened in the near future. In this sense, starting from the principles of big data and from the capacity to turn this information into learning within the formative process is another great field for innovative teachers who think about their students' future.

The role of educational institutions

Educational institutions also have a great task to carry out around the innovation teachers do in the classroom, particularly in a period marked by the entry of a large number of new players into their field of action, players who become their competitors, and in which students are beginning to believe that their economic success does not necessarily depend on years of waiting inside educational institutions. There are at least four major actions these institutions should take to strengthen educational innovation from the classroom.

First, strengthening the groups of teachers working on classroom-based educational innovation, with decisive support for their projects and inquiries, can be a mechanism for moving beyond individual efforts and encouraging collective work by teachers. In addition, defining classroom innovation projects lets the institution identify these practices, systematize them, measure their impact and disseminate them institutionally, as well as turn them into a differentiator against the competition.

Second, educational institutions face the challenge of developing interdisciplinary learning methodologies that match today's training trends, which call for professionals who can give new answers to contemporary problems and anticipate future ones, problems that transcend any single discipline. Educational innovation that brings this element into classroom practice (bringing different areas of knowledge into the same class) will have a greater impact on students' systems thinking. Strengthening the inquiries of teacher collectives with different professional backgrounds, who propose not only methodologies for learning their own subject but who also concern themselves with integrated learning, is thus the second great opportunity (and action) for institutions and for the teachers who work in them.

And to carry out such collective projects, a third action is needed: fostering institutional support for teaching-innovation practices so as to allocate the resources and infrastructure these processes require. Channeled toward teacher collectives, these resources allow the efficient use and cost that innovation processes imply, because the aim is not success at any price but success that remains sustainable in the short, medium and long term. Finally, the last action concerns support for building networks of teachers who develop educational innovation projects. This is not only a stimulus but a catalyst for these collectives' inquiries and findings (in their real capacity to share and disseminate).

Some advice

For the innovative teacher, I draw three general pieces of advice from the above:

1. Always keep your eye on the "what for" of the education you offer. The students in your class will be the professionals and, above all, the citizens of tomorrow. When you innovate, therefore, start from the reality and the potential problems they will face.

2. Your innovation has the student as its object. Remember that the student is the protagonist of learning. Understanding their characteristics will let you make a more relevant contribution to their learning and build the engagement you seek with them. See the student as the foundation of your innovation.

3. Innovation is dynamic. What is new today stops being new tomorrow. So never stop learning and unlearning. Each achievement in your innovation process is the challenge for the next innovation.

Bibliography

Instituto Inovação. (2011). Introducción a la innovación. Campinas: Instituto Inovação.

OCDE & Eurostat. (2015, July 13). Manual de Oslo: Guía para la recogida e interpretación de datos sobre innovación. Retrieved from http://www.uis.unesco.org/Library/Documents/OECDOsloManual05_spa.pdf

Reich, R. B. (1992). The Work of Nations: Preparing Ourselves for 21st-Century Capitalism. New York: Vintage.

Read the original content at this link.


          Big Data Engineer   
          Grand Ventures’ First Investment is in Big Data Startup Astronomer   
Grand Ventures, the West Michigan venture capital firm investing in early-stage tech startups, has closed its first deal. The firm contributed an undisclosed amount to Cincinnati-based big data startup Astronomer’s $3.5 million seed round. The funding was led by San Francisco’s Wireframe Ventures and Ohio’s CincyTech; other investors include Frontline Ventures, Drummond Road Capital, CoreNetwork […]
          Governing big data requires coordinated national and international action, says Agcom president Cardani. But there are privacy risks   
In the governance of Big Data, a "coordinated national and international intervention" is necessary. So believes the president of the Autorità per le [...]
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Broad knowledge of pharmaceutical R&D and/or practical experience in a scientific domain (e.g. Experience designing and building complex, high performance...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Provide direction for hardware, OS, and associated components for the platform whilst operating in a regulated pharmaceutical environment. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Data Scientist - Data Insights & Analytics - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 17 Jun 2017 08:59:04 GMT - View all São Paulo, SP jobs
          Data Scientist - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all São Paulo, SP jobs
          Data Scientist - RA - CSI - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all São Paulo, SP jobs
          Data Engineer - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:07 GMT - View all São Paulo, SP jobs
          Data Engineer - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:06 GMT - View all São Paulo, SP jobs
          Data Scientist - Mkt Place Routing - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:05 GMT - View all São Paulo, SP jobs
          What is Your Best Approach to Decision Making?   

Thanks to computer technology, software and apps, more and more companies rely on big data to drive their business models. Leaders develop strategies using information compiled and analyzed by computers. Despite all of these advances, there still needs to be a human element behind corporate decision-making. Experts cited by Harvard Business Review detail the best approach to deciding how to move forward with a particular strategy.

Elements That Come Together

Three main elements come together when decision-making happens with all of these technology tools in the workplace. First, computers must compile the raw information. Sales figures, revenue, time to market, overhead, supply costs and labor expenses are all raw figures that computers work well with since those machines are great at doing math. Second, a program must analyze the numbers and organize them into usable information. This is when analytics software pays for itself.

The third aspect is that a human must employ decision-making to know what to do with the data. The program created a beautiful chart showing how price affects revenue over time, but what do company leaders do with that information? Business leaders must understand how the software compiles the information and what the numbers mean. People who can't figure out how to handle the data suffer from "big data, little brain," according to HBR.

How to Mitigate the Problem

Experts believe the best way to alleviate the problems that come from relying too heavily on data is for business leaders to go with their gut and experience every once in a while. Finding this balance doesn't mean eschewing information altogether; it's more about knowing which data to pay attention to when making decisions.

Ironically, there is a way to figure this out using a different kind of computer program. An agent-based simulation, or ABS, examines the behavior of humans, crunches the numbers and then makes a predictive model of what may happen. The U.S. Navy developed ABS in the 1970s when it examined the everyday behavior of 30,000 sailors. This type of program is gaining more widespread use due to better computing power.

There is a ton of information that computers must take into account to make these predictive models. When ABS first started, simulations ran for a year before getting results. In 2017, that timeframe is down to one day by comparison.

ABS uses information from the analytics software and applies it to customer behavior models and decision-making to predict what may happen in the future. This type of program answers what happens when a company changes prices of products, if a competitor adapts to market forces in a certain way and what customers may do when you change a product.

ABS can't predict everything, but it does take into account human expertise. ABS, like any analytics software, is only as good as the data it collects. It makes decisions more transparent because it supports the notion that if a company moves in one direction, then a certain scenario is likely to happen. You must remember to take a risk no matter what path you're on.

Decision-making shouldn't be all data or all gut. Data gathering, however, certainly makes it easier, thanks to the technological tools available to businesses.


Photo courtesy of Stuart Miles at FreeDigitalPhotos.net


          RavenPack Helps Hedge Funds Deal With Their Big Data 'Hoarding Disorder'   
The default setting for financial firms today is to hold on to data, like something from Hoarders. Firms have difficulty parting with in-house data because they don't know what is OK to delete. To be on the safe side, especially from a compliance perspective, they just save everything. Given the availability of ever cheaper computing power and storage, the direct cost of following such a strategy is fairly low.
          Scrum Master   
Mastech is a growing company dedicated to innovation and teamwork. We are currently seeking a Scrum Master for our client in the IT Services domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract position and the client is looking for someone to start immediately.

Duration: 6 Months Contract
Location: Alpharetta, GA / Zip Code: 30004
Compensation: Market Rate

Role: Scrum Master

Role Description: The Scrum Master should have at least 5 years of experience.

Required Skills:

- Big Data Background.

Education: Bachelor's Degree
Experience: Minimum 5+ years
Relocation: No, this position will not cover relocation expenses
Travel: No
Local Preferred: Yes

Recruiter Name: Ekta Bhattacharya
Recruiter Phone: 877 884 8834 (Ext: 2021)

EOE
          Project/Program Manager   
Mastech is a growing company dedicated to innovation and teamwork. We are currently seeking a Project/Program Manager for our client in the IT Services domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract position and the client is looking for someone to start immediately.

Duration: 6 Months Contract
Location: Alpharetta, GA / Zip Code: 30004
Compensation: Market Rate

Role: Project/Program Manager

Role Description: The Project/Program Manager should have at least 5 years of experience.

Required Skills:

- Big Data Background.

Education: Bachelor's Degree
Experience: Minimum 5+ years
Relocation: No, this position will not cover relocation expenses
Travel: No
Local Preferred: Yes

Recruiter Name: Abhishek Malik
Recruiter Phone: 412 436 0333 (Ext: 2306)

EOE
          Senior UI Engineer   
**Please contact me at 415 228 4275 if you have any questions about the opportunity**
Modis's client is looking for a Senior UI Engineer. This could be a contract-to-hire or full-time opportunity. Client has locations in San Rafael and San Jose.
Roles and Responsibilities: This is a unique opportunity to be a key player in an organization at the forefront of the growing field of Big Data Analytics. You will be responsible for helping to architect, design, implement, and test key user interfaces for our analytic tools in the Analytic Cloud. You will also be engaged at the early stages and have the ability to develop and shape a new architecture. You will also be exposed to, and integrate with, a number of third-party and open source tools and technologies that will provide you with a unique opportunity to grow your skills and have fun while working on a dynamic and close-knit team.
Working Conditions: You will work with a diverse team of very talented developers who take pride in their work and value their customers. They are driven to see their products succeed in the marketplace and work as a team to accomplish their goals.
Required Experience:
Education and/or Certifications: Bachelor's Degree or higher in Computer Science or a related discipline.
Experience and Qualifications: Experienced Software Engineer with 5+ years of strong professional development experience in Web development with HTML5, CSS, JavaScript, and general Web 2.0 technologies.
Technical Skills and Abilities:
• Experience with JavaScript-based libraries such as ExtJS, JQuery, Bootstrap, Knockout, AngularJS, Backbone.js, YUI, and D3.js
• Strong JavaScript and Java problem solving, debugging, and performance tuning skills
• Good knowledge of object-oriented analysis & design
• Strong instincts and background in creating simple, clean and powerful user interfaces
• Experience with Platform as a Service (PaaS) environments and APIs such as OpenShift and Cloud Foundry a big plus
• Experience with web services (SOAP/REST) and SOA is a plus
• Background in using statistical analysis/modeling tools such as SAS, SPSS, R, etc. is desirable
• Background in using Business Intelligence and Data Mining tools desirable
• Background in scripting languages such as Groovy, Python, Perl, or Ruby is a plus
• Excellent oral and written communication skills
• Capability to provide technical leadership to the team
• Experience with agile development processes and tools desirable
          Roskilde app: Aims to look into the guests' future   
For the past two years, IBM, among others, has worked with big data at Roskilde Festival. Until now, they have been able to track the concertgoers and see their habits. Now they want to predict the future.
          Sr. Systems Engineer - Great Work Environment   
This Sr. Systems Engineer position features:
• Great work environment
• Interesting work
• Dynamic and growing company
• Great pay to $110K

Immediate need for a Sr. Systems Engineer seeking a great work environment, interesting work, and a dynamic, growing company. Experience with AppDynamics, experience with shell and Python, and willingness to travel 50% to Thousand Oaks will be keys to success in this growing, dynamic organization. Will be responsible for cloud and big data technologies, automation tools, and architecture solutions for a Computer/IT Services company. Great benefits. Apply for this great position as a Sr. Systems Engineer today! We are an equal employment opportunity employer and will consider all qualified candidates without regard to disability or protected veteran status.
          Hiring for Consultant - SAS Analytics - Banking/credit Card Domain - Iit/isi/dse in Delhi/NCR(Nation (Delhi Job)   
Job Description:- Need 1-2 years of experience in analytics - Experience in SAS/SQL and Excel is must - Experience in Big Data / Hadoop is preferable - Experience in banking/credit card domain is good to have - Need good communication skills to interact d...
          Sr. Business Analyst/Big Data   
Kansas City, Genesis10 is actively seeking a Sr. Business Analyst/Big Data resource for a contract position opening (contract term approximately 6 months, with possible extension). This position is within the financial services industry for our client located in Kansas City, MO. Description: Come join the Big Data team as a Senior Business Analyst on one of H&R Block's newest data platforms. Responsible for wo
          Republican data analytics firm exposes voting records on 198 million Americans   

Researcher Chris Vickery has discovered nearly 200 million voter records in an unsecured Amazon S3 bucket maintained by Deep Root Analytics (DRA), a big data analytics firm that helps advertisers identify audiences for political ads.

The data was discovered on June 12, and secured two days later after Vickery reported the incident to federal regulators.

Salted Hash has been down this road before with Vickery, who is now a Cyber Risk Analyst for UpGuard. In 2015, Vickery discovered 191 million voter records being stored in an unsecured database.



          Symposium on Big Data in Finance, Retail and Commerce – Lisboa, 2-3 November 2017   
Symposium on Big Data in Finance, Retail and Commerce: Statistical and Computational Challenges. Date: 2-3 November 2017. Venue: Jupiter Lisboa Hotel, Avenida da República, 46, 1050-195 Lisboa, PORTUGAL. Invited speakers: Cristina San José, Banco Santander, España; João Marques da Silva, Universidade de Lisboa, Portugal; Marc Hallin, Université Libre de Bruxelles, Belgium; Nuno Crato, University of Lisbon, Portugal, and European Commission Joint Research Center, Italy […]
          Comment on Trash talk: Why Big Data needs to focus its use case to succeed by One man's trash is another man's big data - SuccessfulWorkplace   
[…] post first appeared on IT Redux and has been lightly […]
          Comment on Tearing down the ivory tower: Can Big Data succeed where BPM has failed ? by Hey, that's not my job - SuccessfulWorkplace   
[…] Tearing down the ivory tower: Can Big Data succeed where BPM has failed? by Theo Priestly […]
          Data Scientist Principal Engineer -RELO to OH   
Akron, If you are a Data Scientist Principal Engineer with experience, please read on! Top Reasons to Work with Us: 1. An awesome opportunity to work with Big Data and be part of high-profile aviation projects for government and defense. 2. We are a privately held company, and we continue to grow our Data Engineering team well into 2017. What You Will Be Doing: Our Engineering team is developing multiple products for
          Big Data Developer - Verizon - Burlington, MA   
Experience with one of Storm or Apache Spark. Experience with NoSQL databases (preferably MongoDB or Cassandra). Ability to adapt and learn quickly in fast...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Data Engineer - CreditCards.com - Austin, TX   
Bankrate Credit Cards is seeking an experienced big data engineer. We help people get the most out of their money through smart credit card recommendations
From CreditCards.com - Fri, 12 May 2017 22:18:08 GMT - View all Austin, TX jobs
          Business Intelligence Manager - Verizon - Basking Ridge, NJ   
This position helps mitigate fraud and identify misconduct and control gaps, utilizing cutting-edge Big Data technologies and modern investigative...
From Verizon - Thu, 29 Jun 2017 10:58:19 GMT - View all Basking Ridge, NJ jobs
          Dr. Encarna Ruiz publishes her doctoral thesis, "Blogs de Moda: del periodisme al marketing"   

The book, the doctoral thesis of Dr. Encarna Ruiz, professor at ESDi and director of ESDiColor_Lab, centers on a study of the Spanish blogosphere and the communicative elements that define it.

Although they are progressively being replaced by other platforms such as Instagram or YouTube, fashion blogs have managed to establish themselves as a defining part of the local industry, both in their personal dynamic with readers and in their ties to clothing brands. Aware of the phenomenon's impact, Dr. Encarna Ruiz, professor at ESDi and director of the trend laboratory ESDiColor_Lab, took these digital spaces as the focus of her doctoral thesis in Journalism, "Blogs de Moda: del periodisme al marketing", now published under the same title by Editorial Académica Española.

"In this thesis I combined my personal interests with my work at ESDi, since I teach the sociology and communication courses and am closely tied to the fashion field through the trend work of ESDiColor_Lab. At a certain point, I began to realize that sociology allowed me to interpret how fashion was reaching a group that was beginning to consume, in this age of hyperconsumption, what was being prescribed from certain websites," Ruiz notes.

Beginning in late 2010 and extending to 2013, her study focused on what she calls the moment of explosion of fashion blogs, which Ruiz analyzed with the tools of discourse analysis and semiotic study. "In Spain, at that time, an interesting phenomenon began to emerge, that of egoblogs, in which anyone dares to launch a blog, with their own photographs and no training in fashion, writing or photography," Ruiz observes; a discourse that was nonetheless exploited by the industry to catch consumers' attention and promote online sales.

One of the study's best contributions, in this regard, is its classification of blogs into different categories, in which egoblogs contrast with communication blogs, which form part of company newsletters, and author blogs, by individuals specialized in fashion who offer personal opinions; the latter were later incorporated by media such as Vogue and Elle, which soon stopped seeing them as competition.

"When I wrote my thesis, big data was one of the industry's great unknowns. Now it is one of its most important possibilities," Ruiz adds, considering how her work might be extended. "There has been a great evolution from blogs to pages that are very fast, very instantaneous, with more viral capacity than blogs and a greater impact in their ties to brands. Now, if I had to expand my study, I would focus on Instagram," she concludes.



          Big Data Developer - Verizon - Burlington, MA   
Development experience on distributed micro services in Java/JVM, Python environment. 7+ years of development experience on distributed micro services in Java...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
The DevOps Engineer will require a deep understanding of key Big Data software applications and operating systems. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Web Marketing Coordinator with AdWords Experience - StrongBox Data Solutions - Quebec   
Coordinate social media outreach. Looking to be part of the future in Big Data management?... $40,000 - $50,000 a year
From StrongBox Data Solutions - Fri, 24 Mar 2017 20:57:05 GMT - View all Quebec jobs
          Sales & Service Associate   
ID-Boise, CenturyLink (NYSE: CTL) is a global communications, hosting, cloud and IT services company enabling millions of customers to transform their businesses and their lives through innovative technology solutions. CenturyLink offers network and data systems management, Big Data analytics and IT consulting, and operates more than 55 data centers in North America, Europe and Asia. The company provides br
          From banking Big Data to personalized service   
Opinion piece by Albert Banal-Estañol, associate professor at Universitat Pompeu Fabra and academic director of the MSc in Corporate Finance and Banking at the UPF Barcelona School of Management. During the [...]
          Managing Big Data: a new strategic challenge   
If we take a brief historical look back over the era of computers, we see that gathering data was at first a matter for workers within companies. These data, in [...]
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Job Category - Enterprise Architecture. Proven experience in solution architecture processes, practices, and guidelines....
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          (USA-WA-Redmond) Principal Software Eng Lead   
Billions of users. Billions of dollars. Millions of products. Planet-wide reach. Sell everything: apps, games, hardware. It’s not Amazon. It’s Microsoft Marketplace Services. If you are excited by working on software at the scale of one of the largest online marketplaces in the world, then this is your dream job.

The Universal Store team in WDG is developing some of Microsoft’s largest scale and business critical cloud services. These services have a huge global footprint of over 240 markets and process millions of transactions daily, with loads growing linearly as Microsoft moves to a “cloud first”, “mobile first” strategy. The platform powers all of Microsoft’s key services - Windows App Store, Windows Phone, XBOX, Bing Ads, Office 365, Azure to name just a few. Whether renting a movie or buying a game on Xbox LIVE, purchasing an app on a Windows or Windows Phone device, signing up for an Office 365 subscription or paying for Azure services, you are using the Universal Store platform.

We are the Marketplace Services team and are responsible for the services which power Microsoft’s marketplace. We specialize in a massive transactional infrastructure that handles the core commerce functionality of Xbox Live, Windows and Phone services. Every time a customer searches, buys, or gets a license for a product in the marketplace, they are using a Marketplace Service. If you have a gamertag, or if you’ve used your Xbox, Windows Store or Windows Phone to purchase a game, a song, a video, a subscription, or an app, then you’re one of our customers.

We’re looking for great developers writing excellent software. If you have a history of designing, owning and shipping software, as well as excellent communication and collaboration skills, then we want to talk to you. You should have a solid understanding of the software development cycle, from architecture to testing. You’ll have a passion for quality and be a creative thinker. You’ll write secure, reliable, scalable, and maintainable code, and then effectively debug it, test it and support it live. You should be comfortable owning a feature and making decisions independently.

We expect an Engineering Lead on the team to:

1. Be a great manager, mentor, and leader of the team and broader organization. You will manage a team of between 5 and 8 developers.
2. Be a great process engineer. You will be accountable for the design, implementation, schedule, and delivery of your team’s services. Doing this in an efficient way is a must.
3. Earn the technical respect of the people on your team. This is a technical position and the ideal candidate should be capable of working in the code, supporting the service, and understanding at a detailed level how the software works.
4. Be the software architect: we will look to you to have an informed opinion about what and how the software should evolve, and have the ability to land your plan in the division. Experience building scalable cloud services, distributed systems, and/or database systems would be a definite plus.

Required Qualifications:
• 10+ years demonstrated experience in designing and developing enterprise-level internet scale services/solutions.

Preferred Qualifications:
• Strong design and programming skills in C# or Java.
• Experience leading and influencing virtual teams of developers to deliver global, highly available, highly scalable services.
• Understanding and experience in Machine Learning a plus.
• Expert hands-on software development expertise including object oriented design skills, .NET etc.
• Excellent analytical skills.
• Excellent communication skills, including the ability to write concise and accurate technical documentation, communicate technical ideas to non-technical audiences, and to lead development teams.
• Demonstrated ability to impact/influence engineering and project teams.
• Proven track record, with industry-level influence, of shipping highly-scalable and reliable services/systems.
• Ability to work independently and in a team setting, and to research innovative solutions for challenging business/technical problems.
• Solid technical aptitude and problem solving skills, initiative, and a results-driven mindset, with strong debugging skills.
• Expert understanding of Engineering Excellence processes and requirements.
• Expertise and knowledge in modern engineering practices (Continuous Integration, TDD, automated deployments with integrated quality gates).
• Experience with Cloud Platforms like Windows Azure Platform, Amazon AWS, Google Apps.
• Experience with Big Data/Analytic Systems like Cosmos or Hadoop implementations.
• Experience in eCommerce and/or Supply Chain.
• Master or Bachelor degree in Computer Science, Computer Engineering, or equivalent with related engineering experience is required.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com.

Development (engineering)
          Red Alert saves lives   
Learn how Quantium and My Choices Foundation have teamed up to eliminate sex trafficking with Big Data using Cisco technology. ...
          Big Data Design Architect   

          Oracle ADF Online Training in Hyderabad,US & UK | Leotrainings   
Leotrainings is a first-rate online training & classroom coaching institute for Java in the United States, UK and India. We cover collections, multithreading, exception handling, JDBC, Servlets and JSP. The institute is not just for Java; it delivers all programming languages with real-time specialists. Java is widely used as a programming language in organizations. Course Objective Summary: during this course, you will learn: Introduction to Big Data and Analytics, Introduction to... $150
          Senior Cloud Engineer/Big Data Architect / Data Science - Corporate Technology - New York, NY   
Perform data analysis with business, understanding business process and structuring both relational and distributed data sets....
From Corporate Technology - Wed, 14 Jun 2017 16:40:06 GMT - View all New York, NY jobs
          CENTER SALES AND SERVICE ASSOCIATE-CARE   
ID-Idaho Falls, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Network Technician   
ID-Idaho Falls, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Python/Big Data Developer - Daily Rate Contract / Stelfox Ltd / Dublin, Dublin, Ireland   
Stelfox Ltd/Dublin, Dublin, Ireland

The Role: Big Data Developer - Python

The Contract: 6-Months Rolling

The Location: Dublin

Our client has a requirement for a senior developer to join them in building a strategic platform using Python. This is a high-exposure role, working cross-functionally with teams in the US. You will work in a highly agile fashion, with flexibility around team organisation and structure. You will have the chance to learn more about FS, Python and Big Data technologies while developing with a leading-edge tech house within the financial industry.

The client is looking for a developer with the following:

Development experience in Python

Agile project experience

Experience in banking/finance sector

Experience in any Big Data technologies (Hadoop/Spark/MapR etc)

A degree or equivalent experience

*Urgent Requirement* - If you are interested in this role please get in touch with Micaela ASAP.

Employment Type: Contract
Duration: 6 months rolling

Apply To Job
          Comment on Analytics specialists discuss profit oriented data strategies by Edward M. Bury, APR   
Compelling perspective on "big data." I work in the transportation research industry, and transit agencies are grappling with the access of large amounts of data available in real time -- something they didn't have years ago. How to aggregate it? How will it make transit safer, more effective?
          Welcome to the Yell Blog   

Thanks for visiting the Yell Blog, the official hub for company news, product announcements and technical insight from Yell.

 

Yell is proud to be one of the country's biggest providers of digital marketing services, which includes the UK's leading online business directory Yell.com, and we’re excited to share updates from our team with you.

Join the Yell community

Follow the Yell Blog to be the first to hear about company news and product updates, get insight into our business operation and discover dynamic new digital features designed to make your business and relationship with Yell even better. We’ll also be regularly sharing infographics and big data research from our R&D team, showing you what’s trending on Yell.com and across local business searches.

 

If you’re also interested in the codecraft that goes into making Yell.com, make sure to check out our Tech channel. Here, Yell’s engineering team share regular posts about the technology news that interests them and their experiences building the various digital products we have here at Yell.

Local business owner?

If you own a business and are looking for digital marketing advice to help your company succeed, head over to the Yell Business Knowledge Centre for tips on website design, SEO and social media from our team of independent digital marketing experts.

 

And, if you're looking to get your business listed on Yell.com, fill in our free online listing form and start reaching out to customers today.

Follow us

Finally, don’t forget to keep up to date with all the latest news at Yell by following us on Twitter, Facebook and LinkedIn.

 

Thanks for reading, we look forward to having you in the Yell community.



          Transformation and Growth Opportunities in the US Next-Generation Sequencing Informatics Market, 2021 - Research and Markets   
...the author expects significant growth in mergers, acquisitions, and partnerships. At the same time, given the growing applications of healthcare Big Data tools in genomics data interpretation, the market is expected to witness the entry of several healthcare information technology ...

          Senior Software Engineer for Big Data & Data Analytics SEAD-0617-VM - Smartercom Italia srl - Milano Nord, Lombardia   
Smartercom Italia, a company specializing in telecommunications and ICT consulting, is expanding its project teams at client sites in Vimercate (Monza Brianza),
From Indeed - Thu, 29 Jun 2017 11:32:09 GMT - View all jobs in Milano Nord, Lombardia
          Software Engineer Big Data & Data Analytics - Smartercom Italia srl - Milano, Lombardia   
Smartercom Italia, a company specializing in telecommunications and ICT consulting, is expanding its project teams at client sites in Vimercate (Monza Brianza),
From IProgrammatori.it - Thu, 29 Jun 2017 12:55:53 GMT - View all jobs in Milano, Lombardia
          Comment on How in-memory computing helps enterprises overcome Big Data woes by Nick N   
You should look into what Symbolic IO is doing and maybe write a post about them. Seems like they found a way to take advantage of in-memory computing with not just the use of software but with the efficient combination of hardware components.
          Univa Powers Formula 1 Racing and more at ISC 2017   

In this video from ISC 2017, Bill Bryce from Univa describes how the company's software powers simulation for everything from Formula 1 racing to manufacturing and life sciences. "Univa is the leading innovator in workload management and containerization solutions. Univa's suite includes the world's most trusted workload optimization solution enabling organizations to manage and optimize distributed applications, containers, data center services, legacy applications, and Big Data frameworks in a single, dynamically shared set of resources."



          Oracle Launch Webcast - New Big Data Cloud Service: Big Data Cloud Machine - June 28, 19:00 CET   

During the 1-hour launch webcast on June 28 at 19:00 CET, key speakers from Oracle, Intel and Cloudera will discuss the latest Oracle Cloud at Customer offering empowering customers to seamlessly shift their big data and analytics initiatives to the cloud.

Join Paul Sonderegger, Big Data Strategist at Oracle, along with key speakers from Intel and Cloudera for an in-depth webcast to hear about the latest Oracle Cloud at Customer offering, empowering customers to seamlessly shift their big data and analytics initiatives to the cloud. With Oracle’s newest big data cloud service for Hadoop and Spark, enterprises can leverage their data to maximize profitability and drive a competitive advantage with their data capital.

Join us to hear:

  • How enterprises are gaining competitive advantages leveraging their data capital
  • How Oracle’s Big Data Cloud Service offering accelerates time-to-value 
  • Customers discuss their own Big Data success stories
  • Leading analytics partners discuss the value they bring to Oracle’s newest Big Data Cloud Service
  • About the broad open source and analytics portfolio support with Oracle’s new Big Data Cloud Service
  • How to start your Big Data journey with Oracle’s newest cloud offering

Don’t miss this launch webcast. Register today!


          Database Insider - June 2017 issue   

The June issue of the Database Insider newsletter is now available.

Some of the articles in this issue include

The New World of Database Technologies

The world of data management in 2017 is diverse, complex and challenging. The industry is changing, the way that we work is changing, and the underlying technologies that we rely upon are changing. During this transition, two trends will continue to dominate data management discussions: the adoption of new "big data" technologies and the movement to the cloud. Download this report to learn about the key developments and emerging best practices for tackling the challenges and opportunities in databases today. 
Download the white paper.

New! Oracle Database Audio White Papers

Got a long commute? You can now listen to the new Oracle Database Audio White Papers and learn about the latest technology while you're on the road. Topics include Oracle Multitenant, Transforming Data Management with Oracle Database 12c Release 2, and Best Practices for IoT Workloads. It's like having your favorite Oracle Database Product Manager in your pocket! 
Visit Oracle Database Audio White Papers channel. 
Read Maria Colgan's blog for an overview.

Oracle Cloud Platform Innovation Awards: Calling All Oracle Cloud Innovators

Do you use Oracle Cloud Platform, specifically Oracle Database Cloud Services, Exadata Cloud Service or Big Data Cloud Services, to deliver unique business value? If so, nominate your organization for the 2017 Oracle Excellence Award for Oracle Cloud Platform Innovation by July 10. Winners will be honored during a special event at Oracle OpenWorld (October 1-5) in San Francisco. 
Learn more about Oracle Cloud Platform Innovation Awards. 
Nominate your organization.

 

Oracle Database Cloud Services "How To" Series

June – September, Americas and EMEA time zones 

Delivered by Oracle Database product management and development, this webcast series is your guide to success with Oracle Database Cloud Services. Presenters will give you architectural insight, technical guidance, and practical steps to help you gain better knowledge of Oracle Database Cloud Service. Whether you are a DBA, a developer, an application architect, a cloud architect, or an IT director, you'll benefit from the technical tips and best practices offered in this webcast series. Mark your calendar for upcoming webcasts!

 

Read the full newsletter here

 


          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Programming languages like Java, Scala, C++. Job Category - Enterprise Architecture. Experience designing and building complex, high performance platforms to...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Big Data Experience: • 4+ years of administration experience on Big Data Hadoop Ecosystem • In-depth knowledge of capacity planning, performance management...
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          COMMUNICATIONS TECH   
WI-Darlington, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          CENTRAL OFFICE TECH   
AZ-Tucson, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Plant Technician   
WI-Hayward, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          Digital Business Integration Manager - Big Data - Accenture - Canada   
Choose Accenture, and make delivering innovative work part of your extraordinary career. Join Accenture and help transform leading organizations and communities...
From Accenture - Tue, 13 Jun 2017 02:38:27 GMT - View all Canada jobs
          Intuition Is The Antidote To Big Data   
Big data can be much more biased than executives realize. The antidote to such bias is human intuition.
          How to use Big Data to improve your Prestashop store   
Lately you have surely heard a lot about Big Data and the advantages it offers in the business world. The ability to use tools for specific analyses makes it possible to know our target audience better, build customer loyalty, provide a better experience on the website, […]
          “Through AI, marketers can be more relevant, while users need to hand over less data”   
Jeremy Waite, evangelist at IBM, demonstrated the Watson Marketing Assistant at WebTomorrow. After his keynote, I spoke with him a little longer. It turned into a lively conversation about the link between big data and AI, the death of the CMO, and beating spammers by simply being a better person.
Read more about: “Through AI, marketers can be more relevant, while users need to hand over less data”.
          How universities can use big data to land grads careers   
BY ROB SPARKS, eCampus News Have faculty, administrators and advisors actually prepared their students for the “real world” and aligned programs, degrees and training with the job market? Without diminishing the quality of the academic program, have students made the right choices to fulfill their ambitions and aspirations and begin their contributions to society? For decades, [...]
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
This role requires interaction with process owners across Pharma R&D teams. Experience designing and building complex, high performance platforms to support Big...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          IBM DB2 Overview, Part 6: Analytical Functions of IBM DB2 LUW   

An overview of the analytical functions of the IBM DB2 DBMS for the Linux, Unix and Windows platforms

This article is the sixth in a series describing the architecture, capabilities and features of the IBM DB2 database management system (DBMS) for the Linux, Unix and Windows platforms (DB2 LUW). It surveys DB2's functions for building analytical data warehouses, the use of DB2 BLU technology for high-speed in-memory analytics, and the DB2 DPF (Database Partitioning Feature) functions for horizontally scaling analytical configurations.

IBM DB2 capabilities for building analytical data warehouses

The IBM DB2 product for Linux, Unix and Windows (DB2 LUW) includes many functions and capabilities for building high-performance analytical data warehouses of any scale. Key DB2 LUW capabilities for high-speed analytical data processing include:

  • DB2 BLU: a modern in-memory, column-organized data processing technology that delivers high performance for complex queries without manual tuning of physical structures to match constantly changing query patterns;
  • DB2 DPF (Database Partitioning Feature): a technology for horizontally scaling DB2 data warehouses on the basis of massively parallel processing (MPP);
  • MDC (Multi-Dimensional Clustering tables) and Table Partitioning: means of splitting a logically unified table, together with its associated indexes, into several physical data partitions, speeding up data access and maintenance operations thanks to the comparatively small size of each partition (relative to the full size of the table);
  • MQT (Materialized Query Tables): a technology for precomputing, storing and then repeatedly reusing the values of complex measures, with support for aggregating data, joining data from several tables and selecting the required subset.

The technologies listed above are aimed first of all at optimizing the physical model of the data warehouse, and they make it possible to achieve the required performance even for the most complex queries. Most of these technologies can be used together (for example, DB2 BLU can be used in horizontally scaled DB2 DPF configurations, and MQTs can be combined with table partitioning to provide efficient access to large tables of computed measures).
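
To make the options above concrete, here is a minimal DDL sketch (the table, column and measure names are hypothetical, not from the article) that combines range partitioning with multidimensional clustering, plus an MQT that precomputes a monthly aggregate:

    -- Hypothetical fact table: range-partitioned by month, MDC-clustered by store
    CREATE TABLE sales (
        sale_date DATE NOT NULL,
        store_id  INTEGER NOT NULL,
        amount    DECIMAL(12,2)
    )
    PARTITION BY RANGE (sale_date)
        (STARTING FROM ('2017-01-01') ENDING ('2017-12-31') EVERY (1 MONTH))
    ORGANIZE BY DIMENSIONS (store_id);

    -- MQT: precomputed monthly totals, refreshed on demand
    CREATE TABLE sales_by_month AS (
        SELECT store_id, MONTH(sale_date) AS sale_month, SUM(amount) AS total_amount
        FROM sales
        GROUP BY store_id, MONTH(sale_date)
    )
    DATA INITIALLY DEFERRED REFRESH DEFERRED;

    REFRESH TABLE sales_by_month;

With such a layout, a query restricted to one month touches a single data partition, and queries over the precomputed totals can be answered from the much smaller MQT.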

The key DB2 software component responsible for the performance of complex queries is the optimizer. The optimizer chooses a query execution plan based on the query text, information about the physical organization of the data (including available indexes, the partitioning scheme in use and other particulars), and statistics on the data held in tables and indexes (including record counts and the statistical distribution of column values). Choosing an efficient execution plan lets a query run faster and with minimal use of compute resources, whereas a poor plan means a comparatively long execution time and excessive consumption of compute resources.

In transactional workloads, efficient query execution depends primarily on the presence of the necessary indexes, which eliminate full scans of table records. For more complex (summary, analytical) queries the optimizer's task becomes much harder because of the need to join several tables, group records, compute aggregate measures and evaluate window functions. A well-chosen physical organization of the data that lets the necessary queries run efficiently, together with up-to-date statistics, is critical for sustaining high performance of complex queries.

The modern "in-memory" analytics techniques implemented in DB2 BLU eliminate or minimize the need for manual optimization of the physical data organization to fit the queries being run, thanks to a special column-organized storage format, data compression and processing of compressed data, extensive skipping of unneeded information during scans, and aggressive execution parallelism.

An overview of DB2 BLU technology for "in-memory" analytics

DB2 BLU has been available in IBM DB2 since 2013 and was the main functional novelty of IBM DB2 version 10.5. It has been evolving ever since; an overview of what is new in DB2 BLU in version 11.1 can be found in the official documentation, as well as in an article on the IBM Big Data & Analytics Hub site.

DB2 BLU extends IBM DB2 with functions for storing and processing data in a column-organized format, with support for parallel "vector" processing of columnar data. Using columnar storage together with a star schema for data marts reduces storage volume, improves query performance and sharply reduces the need for manual tuning of the physical data organization and of the queries themselves (compared with building the same schema on row-organized data). Column-organized storage is also successfully applied to other data models.

A DB2 table can be created in row-organized (traditional) or column-organized format by specifying either ORGANIZE BY ROW or ORGANIZE BY COLUMN in the CREATE TABLE statement. The default organization (used when no specifier is given) depends on the DFT_TABLE_ORG database configuration parameter, which can be set to ROW or COLUMN.
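
For illustration, a minimal sketch (the table names test.rowtab and test.coltab and the database name mydb are hypothetical):

    -- Explicit choice of organization at creation time
    CREATE TABLE test.rowtab (id INT, val DOUBLE) ORGANIZE BY ROW;
    CREATE TABLE test.coltab (id INT, val DOUBLE) ORGANIZE BY COLUMN;
    -- Change the default applied when no specifier is given
    UPDATE DB CFG FOR mydb USING DFT_TABLE_ORG COLUMN;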

All the standard data retrieval and modification operators can be applied to column-organized tables, including SELECT, INSERT/UPDATE/DELETE and MERGE, as well as the IMPORT/EXPORT, INGEST and LOAD utilities for querying, loading, changing and deleting data. A single query can reference both row-organized and column-organized tables, although for best performance it is preferable to select from column-organized tables.

One sometimes hears the claim that DB2 BLU is not "real" in-memory technology, because DB2 allows data to be loaded from disk during operation and previously read data to be evicted from main memory.

The answer to this claim is that the principal technologies behind the marketing term "in-memory" are:

  • column-organized data storage,
  • skipping of unneeded data blocks,
  • compression of stored information and processing of data in compressed form,
  • execution parallelism and processing optimized for the CPU cache hierarchy.

All of the technologies listed above are implemented in DB2 BLU fully and with high efficiency, which allows it to compete successfully with rival solutions in performance benchmarks.

A hard requirement that all data reside strictly in main memory during processing, by contrast, leads to inflated hardware requirements.

For example, to serve comparatively rare queries (with no strict execution deadline), DB2 can, when main memory runs short, read some rarely used data from disk and spill intermediate results to disk. In similar situations some competing systems either forgo the benefits of columnar processing (working instead on a row-organized copy of the tables) or simply abort the query with an out-of-memory error.

High performance of queries against column-organized tables is achieved through the simultaneous use of several techniques aimed at reducing the volume of processed information and increasing execution parallelism (a catalog query touching on the synopsis tables mentioned below follows the list):

  • Column-organized storage makes it possible to avoid scanning and processing unneeded columns. For tables with many columns (for example, fact tables holding measures), most queries touch only a small fraction of the columns.
  • For every column-organized table, DB2 automatically creates and maintains a special index-like data structure called a synopsis table, which records the minimum and maximum values of all columns over blocks of roughly 1000 records. This structure allows unneeded blocks of records to be skipped when a query restricts the values of the corresponding columns.
  • Data in column-organized tables is stored compressed; typical compression ratios on real data are roughly 3x to 10x. Working with compressed data reduces the storage volume, the main memory needed for caching, and the number of I/O operations during data access.
  • Thanks to the special compression method used, predicates are evaluated during query execution without decompression; that is, the result of comparison operations (less than, greater than, equals) between column values, or between column values and constants, can be computed with no extra CPU cost.
  • Columnar processing makes heavy use of parallelism: different tables, columns and even individual blocks are scanned in parallel on several processor cores. Moreover, a single data block is scanned using the SIMD (Single Instruction Multiple Data) instructions of modern x86 and Power processors, which allows several values to be processed in one instruction (for example, evaluating a predicate comparison).
  • The data needed to execute a query is loaded into the buffer pools asynchronously and in parallel, which in most cases eliminates waits for data reads during query execution. What to load is decided by a specially designed algorithm that predicts exactly which information will be required by the chosen execution plan.
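
As a side note, the synopsis tables mentioned above can be observed in the system catalog. A hedged illustration (the internal naming is an implementation detail and may vary by version):

    -- Column-organized tables are marked TABLEORG = 'C' in SYSCAT.TABLES;
    -- the automatically created synopsis tables appear in the SYSIBM
    -- schema with names beginning with 'SYN'
    SELECT tabschema, tabname FROM syscat.tables WHERE tableorg = 'C';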

At present, tables created in column-organized format do not support auxiliary objects such as secondary indexes (only uniqueness constraints via the PRIMARY KEY and UNIQUE syntax are supported) or triggers. This limitation will most likely be lifted in the near future, widening the possibilities for building hybrid transactional-analytical databases.

To enable support for column-organized tables in a database, a number of settings must be configured. If the database workload is predominantly analytical (for example, a database of analytical data marts), setup can be simplified by setting the DB2 registry variable DB2_WORKLOAD to ANALYTICS before creating the database, for example:
    db2set DB2_WORKLOAD=ANALYTICS

The procedure for manually configuring a database to work with column-organized tables is described in detail in the DB2 documentation. The main requirements include the following (a sketch of the corresponding commands follows the list):

  • the database must use the Unicode or ISO8859-1 (code page 819) encoding, with the IDENTITY or IDENTITY_16BIT collation;
  • "manual" (not automatic) sort memory management: the SORTHEAP and SHEAPTHRES_SHR parameters must not be set to AUTOMATIC;
  • the SHEAPTHRES parameter must be set to 0, thereby switching the database into shared sort memory mode.
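
A minimal CLP script sketch of these manual steps (the database name mydb and the specific sizes are illustrative assumptions, not recommendations):

    -- SHEAPTHRES is an instance-level (DBM) parameter; 0 enables shared sort memory
    UPDATE DBM CFG USING SHEAPTHRES 0;
    -- Fixed (non-AUTOMATIC) sort memory values for the database itself
    UPDATE DB CFG FOR mydb USING SORTHEAP 100000 SHEAPTHRES_SHR 400000;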

The manual configuration route is more flexible: it makes it possible to create a database suited to running both transactional and analytical workloads at the same time, in line with the concept of Hybrid Transactional and Analytical Processing (HTAP).

A key property of DB2 BLU is the simplicity of applying the technology. With DB2 BLU, manual administrator work on index selection, choice of physical data organization and query tuning is eliminated entirely or minimized. At the same time, administrators and developers do not need to learn any new skills or tools:

  • column-organized tables are created with the usual syntax, and the ORGANIZE BY COLUMN option can be applied automatically, without being stated explicitly in CREATE TABLE, depending on the database settings;
  • data is loaded and modified by the usual methods supported by DB2 (the INSERT, UPDATE and MERGE operators; the IMPORT, LOAD and INGEST utilities);
  • the full SQL query syntax is supported for column-organized tables, and a single query can combine both column-organized and row-organized tables (when row-organized tables appear in a query, DB2 executes as much of the query as possible in columnar in-memory mode and then switches to ordinary row-organized processing);
  • operations on column-organized tables are fully supported by all DB2 management and monitoring tools, including the tools for gathering statistics on execution speed and resource usage, the facilities for building query execution plans, and the workload priority management tools.

As a result, it is enough to create tables in column-organized format and load data into them by the usual methods, after which the overwhelming majority of complex analytical queries will run with consistently high performance.

When working with DB2 BLU, single-row insert, update and delete operators should be used with caution. Comparatively low performance of such operations is a property shared by all column-organized databases. In addition, a large number of small modifications can reduce the fill efficiency of the synopsis tables, whose size can then grow out of proportion to the volume of data stored in the main column-organized table. The newest DB2 versions have considerably improved the performance of single-row changes to column-organized tables, but these operations still run much more slowly than on row-organized tables.

An important and useful DB2 BLU feature is the so-called shadow tables: asynchronously updated column-organized copies of ordinary row-organized tables. With shadow tables, transactional applications continue to work with the usual row-organized data. Changes in the row-organized tables are carried over to the column-organized copies by a data replication mechanism, practically in real time (it is important to remember that some propagation delay does exist). For complex queries (for example, from regular or ad hoc reporting tools), the DB2 optimizer automatically uses the available column-organized copies of the data in shadow tables, executing such queries with DB2 BLU technology.

The shadow table mechanism removes the need to rework the physical data structures of a transactional application to serve the complex queries of analytical applications, while still providing fast execution of complex queries over the most current, continuously updated information.
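
As a hedged sketch of what declaring such a replication-maintained copy can look like (the table names are hypothetical, and the replication process itself is configured separately):

    -- Column-organized shadow copy of a row-organized source table
    CREATE TABLE test.orders_shadow AS (SELECT * FROM test.orders)
        DATA INITIALLY DEFERRED REFRESH DEFERRED
        ENABLE QUERY OPTIMIZATION
        MAINTAINED BY REPLICATION
        ORGANIZE BY COLUMN;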

Below is an example SQL script that creates a column-organized table and loads data into it from an existing row-organized table with the LOAD utility.

-- Create an empty column-organized copy of the source table's structure
CREATE TABLE test.blutab1 AS (SELECT * FROM test.nortab1)
    WITH NO DATA ORGANIZE BY COLUMN;
-- Declare a cursor over the source rows and bulk-load them into the copy
DECLARE x1 CURSOR FOR SELECT * FROM test.nortab1;
LOAD FROM x1 OF CURSOR REPLACE INTO test.blutab1;

Conclusion

This article reviewed the capabilities of IBM DB2 for building analytical data warehouses. It described the column-organized storage and processing technology (DB2 BLU), the means of horizontal scaling for analytical configurations (DB2 DPF), the use of table and index partitioning to speed up queries and make data transformation and load procedures more efficient, and the use of materialized query tables (MQT) to accelerate analytical queries.

          The GWAS hoax....or was it a hoax? Is it a hoax?   
A long time ago, in 2000, in Nature Genetics, Joe Terwilliger and I critiqued the idea then being pushed by the powers-that-be, that the genomewide mapping of complex diseases was going to be straightforward, because of the 'theory' (that is, rationale) then being proposed that common variants caused common disease.  At one point, the idea was that only about 50,000 markers would be needed to map any such trait in any global populations.  I and collaborators can claim that in several papers in prominent journals, in a 1992 Cambridge Press book, Genetic Variation and Human Disease, and many times on this blog we have pointed out numerous reasons, based on what we know about evolution, why this was going to be a largely empty promise.  It has been inconvenient for this message to be heard, much less heeded, for reasons we've also discussed in many blog posts.

Before we get into that, it's important to note that unlike me, Joe has moved on to other things, like helping Dennis Rodman's diplomatic efforts in North Korea (here, Joe's shaking hands as he arrives in his most recent trip).  Well, I'm more boring by far, so I guess I'll carry on with my message for today.....

There's now a new paper, coining a new catch-word (omnigenic), to proclaim the major finding that complex traits are genetically complex.  The paper seems solid and clearly worthy of note.  The authors examine the chromosomal distribution of sites that seem to affect a trait, in various ways including chromosomal conformation.  They argue, convincingly, that mapping shows that complex traits are affected by sites strewn across the genome, and they provide a discussion of the pattern and findings.

The authors claim an 'expanded' view of complex traits, and as far as that goes it is justified in detail. What they are adding to the current picture is the idea that mapped traits are affected by 'core' genes but that other regions spread across the genome also contribute. In my view the idea of core genes is largely either obvious (as a toy example, the levels of insulin will relate to the insulin gene) or the concept will be shown to be unclear.  I say this because one can probably always retroactively identify mapped locations and proclaim 'core' elements, but why should any genome region that affects a trait be considered 'non-core'?

In any case, that would be just a semantic point if it were not predictably the phrase that launched a thousand grant applications.  I think neither the basic claim of conceptual novelty, nor the breathless exploitive treatment of it by the news media, is warranted: we've known these basic facts about genomic complexity for a long time, even if the new analysis provides other ways to find or characterize the multiplicity of contributing genome regions.  All of this assumes that mapping markers are close enough to functionally relevant sites that the latter can be found, that the unmappable fraction of the heritability isn't leading to over-interpretation of what is 'mapped' (that is, what reached significance), and that what isn't mapped won't change the picture.

However, I think the first thing we really need to do is understand the futility of thinking of complex traits as genetic in the 'precision genomic medicine' sense, and the last thing we need is yet another slogan by which hands can remain clasped around billions of dollars for Big Data resting on false promises.  Yet even the new paper itself ends with the ritual ploy, the assertion of the essential need for more information--this time, on gene regulatory networks.  I think it's already safe to assure any reader that these, too, will prove to be as obvious and as elusively ephemeral as genome wide association studies (GWAS) have been.

So was GWAS a hoax on the public?
No!  We've had a theory of complex (quantitative) traits since the early 1900s.  Other authors argued similarly, but RA Fisher's famous 1918 paper is the usual landmark.  His theory was, simply put, that infinitely many genome sites contribute to quantitative (what we now call polygenic) traits.  The general model has jibed with the age-old experience of breeders, who have used empirical strategies to improve crop and pet species.  Since association mapping (GWAS) became practicable, they have used mapping-related genotypes to help select animals for breeding; but genomic causation is so complex and changeable that they've recognized even this will have to be regularly updated.
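
To restate that theory in symbols (my notation, purely illustrative), the additive polygenic model writes a quantitative trait value as

    $y = \mu + \sum_{i=1}^{n} \beta_i g_i + \varepsilon$

where $g_i$ is the genotype at contributing site $i$, each effect $\beta_i$ is small, $\varepsilon$ is everything non-genetic, and Fisher's limit lets $n$ grow without bound; heritability is then just the additive fraction of the total variance, $h^2 = V_A / V_P$.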

But when genomewide mapping of complex traits was first really done (a prime example being BRCA genes and breast cancer) it seemed that apparently complex traits might, after all, have mappable genetic causes. BRCA1 was found by linkage mapping in multiply affected families (an important point!), in which a strong-effect allele was segregating.  The use of association mapping  was a tool of convenience: it used random samples (like cases vs controls) because one could hardly get sufficient multiply affected families for every trait one wanted to study.  GWAS rested on the assumption that genetic variants were identical by descent from common ancestral mutations, so that a current-day sample captured the latest descendants of an implied deep family: quite a conceptual coup based on the ability to identify association marker alleles across the genome identical by descent from the un-studied shared remote ancestors.

Until it was tried, we really didn't know how tractable such mapping of complex traits might be.  Perhaps heritability estimates based on quantitative statistical models were hiding what really could be enumerable, replicable causes, in which case mapping could lead us to functionally relevant genes.  It was certainly worth a try!

But it was quickly clear that this was in important ways a fool's errand.  Yes, some good things were to be found here and there, but the hoped-for miracle findings generally weren't there to be found. This, however, was a success not a failure!  It showed us what the genomic causal landscape looked like, in real data rather than just Fisher's theoretical imagination.  It was real science.  It was in the public interest.

But that was then.  It taught us its lessons, in clear terms (of which the new paper provides some detailed aspects).  But it long ago reached the point of diminishing returns.  In that sense, it's time to move on.

So, then, is GWAS a hoax?
Here, the answer must now be 'yes'!  Once the lesson is learned, bluntly speaking, continuing on is more a matter of keeping the funds flowing than profound new insights.  Anyone paying attention should by now know very well what the GWAS etc. lessons have been: complex traits are not genetic in the usual sense of being due to tractable, replicable genetic causation.  Omnigenic traits, the new catchword, will prove the same.

There may not literally be infinitely many contributing sites as in the original statistical models, be they core or peripheral, but infinitely many isn't so far off.  Hundreds or thousands of sites, accounting for only a fraction of the heritability, mean essentially infinitely many contributors for any practical purposes.  This is particularly so since the set is not a closed one: new mutations are always arising and current variants dying away, and along with somatic mutation, the number of contributing sites is open ended, and not enumerable within or among samples.

The problem is actually worse.  All these data are retrospective statistical fits to samples of past outcomes (e.g., sampled individuals' blood pressures, or cases' vs controls' genotypes).  Past experience is not an automatic prediction of future risk.  Future mutations are not predictable, not even in principle.  Future environments and lifestyles, including major climatic dislocations, wars, epidemics and the like, are not predictable, not even in principle.  Future somatic mutations are not predictable, not even in principle.

GWAS almost uniformly have found (1) different mapping results in different samples or populations, (2) only a fraction of heritability is accounted for by tens, hundreds, or even thousands of genome locations and (3) even relatively replicable 'major' contributors, themselves usually (though not always) small in their absolute effect, have widely varying risk effects among samples.

These facts are all entirely expectable based on evolutionary considerations, and they have long been known, in principle, indirectly, and from detailed mapping of complex traits.  There are other well-known reasons why, based on evolutionary considerations among other things, this kind of picture should be expected.  They involve the blatantly obvious redundancy in genetic causation, which is the result of the origin of genes by duplication and the highly complex pathways to our traits, among other things.  We've written about them here in the past.  So, given what we now know, more of this kind of Big Data is a hoax, and as such, a drain on public resources and, perhaps worse, on the public trust in science.

What 'omnigenic' might really mean is interesting.  It could mean that we're pressing up ever more intensely against the log-jam of understanding based on an enumerative gestalt about genetics.  Ever more detail, always promising that if we just enumerate and catalog a bit more (in this case, the authors say we need to study gene regulatory networks) we'll understand.  But that is a failure to ask the right question: why and how could every trait be affected by every part of the genome?  Until someone starts looking at the deeper mysteries we've been identifying, we won't have the transformative insight that seems to be called for, in my view.

To use Kuhn's term, this really is normal science pressing up against a conceptual barrier, in my view.  The authors work the details, but there's scant hint that they recognize we need something more than more of the same.  What is called for, I think, is young people who haven't already been propagandized into the current way of thinking, the current grantsmanship path to careers.

Perhaps more importantly, I think the situation is at present an especially cruel hoax, because there are real health problems, and real, tragic, truly genetic diseases that a major shift in public funding could enable real science to address.
          Some genetic non-sense about nonsense genes   
The April 12 issue of Nature has a research report and a main article about what is basically presented as the discovery that people typically carry doubly knocked-out genes, but show no effect.  The editorial (p 171) notes that the report (p 235) uses an inbred population to isolate double-knockout genes (that is, recessive homozygous null mutations) and look at their effects.  The population sampled, from Pakistan, has high levels of consanguineous marriage.  The criterion for a knockout mutation was based on the protein-coding sequence.

We have no reason to question the technical accuracy of the papers, nor their relevance to biomedical and other genetics, but there are reasons to assert that this is nothing newly discovered, and that the story misses the really central point that should, I think, be undermining the expensive Big Data/GWAS approach to biological causation.

First, for some years now there have been reports of samples of individual humans (perhaps also of yeast, but I can't recall specifically) in which both copies of a gene appear to be inactivated.  The criteria for saying so are generally indirect, based on nonsense, frameshift, or splice-site mutations in the protein code.  That is, there are other aspects of coding regions that may be relevant to whether this is a truly thorough search to show that whatever is coded really is non-functional.  The authors mention some of these.  But, basically, costly as it is, this is science on the cheap, because it clearly only addresses some aspects of gene functionality.  It would obviously be almost impossible to show either that the gene was never expressed or never worked.  For our purposes here, we need not question the finding itself.  The fact that this is not a first discovery does raise the question of why a journal like Nature is so desperate for Dramatic Finding stories, since this one really should instead be a report in one of the many specialty human genetics journals.

Secondly, there are causes other than coding mutations for gene inactivation.  They have to do with regulatory sequences, and inactivating mutations in that part of a gene's functional structure are much more difficult, if not impossible, to detect with any completeness.  A gene's coding sequence itself may seem fine, but its regulatory sequences may simply not enable it to be expressed.  Gene regulation depends on epigenetic DNA modification and multiple transcription factor binding sites, as well as the functional aspects of the many proteins required to activate a gene, and other aspects of the local DNA environment (such as RNA editing or RNA interference).  The point here is that there are likely to be many other instances of people with complete or effectively complete double knockouts of genes.

Thirdly, the assertion that these double KOs have no effect depends on various assumptions.  Mainly, it assumes that the sampled individuals will not, in the future, experience the otherwise-expected phenotypic effects of their defunct genes.  Effects may depend on age, sex, and environment rather than being a congenital yes/no functional matter.

Fourthly, there may be many coding mutations that make the protein non-functional, but these are ignored by this sort of study because they aren't clear knockout mutations, yet they are in whatever data are used for comparison of phenotypic outcomes.  There are post-translational modifications, RNA editing, RNA modification, and other aspects of a 'gene' that this approach is not picking up.

Fifthly, and by far most important, I think, is that this is the tip of the iceberg of redundancy in genetic functions.  In that sense, the current paper is a kind of factoid that reflects what GWAS has been showing in great, if implicit, detail for a long time: there is great complexity and redundancy in biological functions.  Individual mapped genes typically affect trait values or disease risks only slightly.  Different combinations of variants at tens, hundreds, or even thousands of genome sites can yield essentially the same phenotype (and here we ignore the environment which makes things even more causally blurred).

Sixthly, other samples and certainly other populations, as well as individuals within the Pakistani database, surely carry various aspects of redundant pathways, from plenty of them to none.  Indeed, the inbreeding that was used in this study obviously affects the rest of the genome, and there's no particular way to know in what way, or more importantly, in which individuals.  The authors found a number of basically trivial or no-effect results as it is, even after their hunt across the genome.  Whether some individuals had an attributable effect of a particular double knockout is problematic at best.  Every sample, even of the same population, and certainly of other populations, will have different background genotypes (homozygous or not), so this is largely a fishing expedition in a particular pond that cannot seriously be extrapolated to other samples.

Finally, this study cannot address the effect of somatic mutation on phenotypes and their risk of occurrence.  Who knows how many local tissues have experienced double-knockout mutations and produced (or not produced) some disease or other phenotype outcome.  Constitutive genome sequencing cannot detect this.  Surely we should know this very inconvenient fact by now!

Given the well-documented and pervasive biological redundancy, it is not any sort of surprise that some genes can be non-functional and the individual phenotypically within a viable, normal range. Not only is this not a surprise, especially by now in the history of genetics, but its most important implication is that our Big Data genetic reductionistic experiment has been very successful!  It has, or should have, shown us that we are not going to be getting our money's worth from that approach.  It will yield some predictions in the sense of retrospective data fitting to case-control or other GWAS-like samples, and it will be trumpeted as a Big Success, but such findings, even if wholly correct, cannot yield reliable true predictions of future risk.

Does environment, by any chance, affect the studied traits?  We have, in principle, no way to know what environmental exposures (or somatic mutations) will be like.  The by now very well documented leaf-litter of rare and/or small-effect variants plagues GWAS for practical statistical reasons (and is why usually only a fraction of heritability is accounted for).  Naturally, finding a single doubly inactivated gene may, but by no means need, yield reliable trait predictions.

By now, we know of many individual genes whose coded function is so proximate or central to some trait that mutations in such genes can have predictable effects.  This is the case with many of the classical 'Mendelian' disorders and traits that we've known for decades.  Molecular methods have admirably identified the gene and mutations in it whose effects are understandable in functional terms (for example, because the mutation destroys a key aspect of a coded protein's function).  Examples are Huntington's disease, PKU, cystic fibrosis, and many others.

However, these are at best the exceptions that lured us to think that even more complex, often late-onset traits would be mappable so that we could parlay massive investment in computerized data sets into solid predictions and identify the 'druggable' genes-for that Big Pharma could target.  This was predictably an illusion, as some of us were saying long ago and for the right reasons.  Everyone should know better now, and this paper just reinforces the point, to the extent that one can assert that it's the political economic aspects of science funding, science careers, and hungry publications, and not the science itself, that leads to the persistence of drives to continue or expand the same methods anyway.  Naturally (or should one say reflexively?), the authors advocate a huge Human Knockout Project to study every gene--today's reflex Big Data proposal.**

Instead, it's clearly time to recognize the relative futility of this, and change gears to more focused problems that might actually punch their weight in real genetic solutions!

** [NOTE added in a revision.  We should have a wealth of data by now, from many different inbred mouse and other animal strains, and from specific knockout experiments in such animals, to know that the findings of the Pakistani family paper are to be expected.  About 1/4 to 1/3 of knockout experiments in mice have no effect or not the same effect as in humans, or have no or different effect in other inbred mouse strains.  How many times do we have to learn the same lesson?  Indeed, with existing genomewide sequence databases from many species, one can search for 2KO'ed genes.  We don't really need a new megaproject to have lots of comparable data.]


          Reforming research funding and universities   
Any aspect of society needs to be examined on a continual basis to see how it could be improved.  University research, such as that which depends on grants from the National Institutes of Health, is one area that needs reform.  It has gradually become an enormous, money-directed, and largely self-serving enterprise, and its need for external grant funding turns science into a factory-like industry, which undermines what science should be about: advancing knowledge for the benefit of society.

The Trump policy, if there is one, is unclear, as with much of what he says on the spur of the moment.  He's threatened to reduce the NIH budget, but he's also said to favor an increase, so it's hard to know whether this represents whims du jour or policy.  But regardless of what comes from on high, it is clear to many of us with experience in the system that health and other science research has become very costly relative to its promise, and too often mechanical rather than inspired.

For these reasons, it is worth considering what reforms could be taken--knowing that changing the direction of a dependency behemoth like NIH research funding has to be slow because too many people's self-interests will be threatened--if we were to deliver in a more targeted and cost-efficient way on what researchers promise.  Here's a list of some changes that are long overdue.  In what follows, I have a few FYI asides for readers who are unfamiliar with the issues.

1.  Reduce grant overhead amounts
[FYI:  Federal grants come with direct and indirect costs.  Direct costs pay the research staff, the supplies and equipment, travel and collecting data and so on.  Indirect costs are worked out for each university, and are awarded on top of the direct costs--and given to the university administrators.  If I get $100,000 on a grant, my university will get $50,000 or more, sometimes even more than $100K.  Their claim to this money is that they have to provide the labs, libraries, electricity, water, administrative support and so on, for the project, and that without the project they'd not have these expenses.  Indeed, an indicator of the fat that is in overhead is that as an 'incentive' or 'reward', some overhead is returned as extra cash to the investigator who generated it.]

University administrations have notoriously been ballooning.  Administrators and their often fancy offices depend on individual grant overhead, which naturally puts intense pressure on faculty members to 'deliver'.  Educational institutions should be lean and efficient. Universities should pay for their own buildings and libraries and pare back bureaucracy. Some combination of state support, donations, and bloc grants could be developed to cover infrastructure, if not tied to individual projects or investigators' grants. 

2.  No faculty salaries on grants
[FYI:  Federal grants, from NIH at least, allow faculty investigators' salaries to be paid from grant funds.  That means that in many health-science universities, the university itself is paying only a fraction, often tiny and perhaps sometimes none, of its faculty's salaries.  Faculty without salary-paying grants will be paid some fraction of their purported salaries, and often for a limited time only.  And salaries generate overhead, so they're now well paid: higher pay, higher overhead for administrators!  Duh, a no-brainer!]

Universities should pay their faculty's salaries from their own resources.  Originally, grant reimbursement for faculty investigators' salaries was, in my understanding, provided so the university could hire temporary faculty to cover the PI's teaching and administrative obligations while s/he was doing the research.  Otherwise, if they're already paid to do research, what's the need?  Faculty salaries paid on grants should only be allowed to be used in this way, not just as a source of cash.  Faculty should not be paid on soft money, because the need to hustle one's salary steadily is an obvious corrupting force on scientific originality and creativity.

3.  Limit on how much external funding any faculty member or lab could have
There is far too much reward for empire-builders. Some do, or at least started out doing, really good work, but that's not always the case and diminishing returns for expanding cost is typical.  One consequence is that new faculty are getting reduced teaching and administrative duties so they can (must!) write grant applications. Research empires are typically too large to be effective and often have absentee PIs off hustling, and are under pressure to keep the factory running.  That understandably generates intense pressure to play it safe (though claiming to be innovative); but good science is not a predictable factory product. 

4.  A unified national health database
We need health care reform, and if we had a single national health database it would reduce medical costs and could be anonymized so research could be done, by any qualified person, without additional grants.  One can question the research value of such huge databases, as is true even of the current ad hoc database systems we pay for, but they would at least be cost-effective.

5. Temper the growth ethic 
We are over-producing PhDs, and this is largely to satisfy the game of the current faculty by which status is gained by large labs.  There are too many graduate students and post-docs for the long-term job market.  This is taking a heavy personal toll on aspiring scientists.  Meanwhile, there is inertia at the top, where we have been prevented from imposing mandatory retirement ages.  Amicably changing this system will be hard and will require creative thinking; but it won't be as cruel as the system we have now.

6. An end to deceptive publication characteristics  
We routinely see papers listing more authors than there are residents in the NY phone book.  This is pure careerism in our factory-production mode.  As once was the standard, every author should in principle be able to explain his/her paper on short notice.  I've heard 15 minutes. Those who helped on a paper such as by providing some DNA samples, should be acknowledged, but not listed as authors. Dividing papers into least-publishable-units isn't new, but with the proliferation of journals, it's out of hand.  Limiting CV lengths (and not including grants on them) when it comes to promotion and tenure could focus researchers' attention on doing what's really important rather than chaff-building.  Chairs and Deans would have to recognize this, and move away from safe but gameable bean-counting.  

[FYI: We've moved towards judging people internally, and sometimes externally in grant applications, on the quantity of their publications rather than the quality, or on supposedly 'objective' (computer-tallied) citation counts.  This is play-it-safe bureaucracy and obviously encourages CV padding, which is reinforced by the proliferation of for-profit publishing.  Of course some people are both highly successful in the real scientific sense of making a major discovery, as well as in publishing their work.  But it is naive not to realize that many, often the big players grant-wise, manipulate any counting-based system.  For example, they can cite their own work in ways that increase the 'citation count' that Deans see.  Papers with very many authors also lead to credit-claiming that is highly exaggerated relative to the actual scientific contribution.  Scientists quickly learn how to manipulate such 'objective' evaluation systems.]

7.  No more too-big-and-too-long-to-kill projects
The Manhattan Project and many others taught us that if we propose huge, open-ended projects we can have funding for life.  That's what the 'omics era and other epidemiological projects reflect today.  But projects that are so big they become politically invulnerable rarely continue to deliver the goods.  Of course, the PIs, the founders and subsequent generations, naturally cry that stopping their important project after having invested so much money will be wasteful!  But it's not as wasteful as continuing to invest in diminishing returns.  Project duration should be limited and known to all from the beginning.

8.  A re-recognition that science addressing focal questions is the best science
Really good science is risky because serious new findings can't be ordered up like hamburgers at McD's.  We have to allow scientists to try things.  Most ideas won't go anywhere.  But we don't have to allow open-ended 'projects' to scale up interminably as has been the case in the 'Big Data' era, where despite often-forced claims and PR spin, most of those projects don't go very far, either, though by their size alone they generate a blizzard of results. 

9. Stopping rules need to be in place  
For many multi-year or large-scale projects, an honest assessment part-way through would show that the original question or hypothesis was wrong or won't be answered.  Such a project (and its funds) should have to be ended when it is clear that its promise will not be met.  It should be a credit to an investigator who acknowledges that an idea just isn't working out, and those who don't should be barred for some years from further federal funding.  This is not a radical new idea: it is precedented in the drug trial area, and we should do the same in research.  

It should be routine for universities to provide continuity funding for productive investigators so they don't have to cling to go-nowhere projects.  Faculty investigators should always have an operating budget so that they can do research without an active external grant.  Right now, they have to piggy-back their next idea on funds in their current grant, and without internal continuity funding this naturally leads to safe, 'fundable' projects rather than really innovative ones.  The reality is that truly innovative projects typically are not funded, because it's easy for grant review panels to find fault and move on to the safer proposals.

10. Research funding should not be a university welfare program
Universities are important to society and need support.  Universities as well as scientists become entrenched.  It's natural.  But society deserves something for its funding generosity, and one of the facts of funding life could be that funds move.  Scientists shouldn't have a lock on funding any more than anybody else. Universities should be structured so they are not addicted to external funding on grants. Will this threaten jobs?  Most people in society have to deal with that, and scientists are generally very skilled people, so if one area of research shrinks others will expand.

11.  Rein in costly science publishing
Science publishing has become what one might call a greedy racket.  There are far too many journals, rushing out half-reviewed papers for pay-as-you-go authors.  Papers are typically paid for on grant budgets (though one can ask how often young investigators shell out their own personal money to keep their careers going).  Profiteering journals are proliferating to serve the CV-padding, hyper-hasty, bean-counting science industry that we have established.  Yet the vast majority of papers have basically no impact.  That money should go to actual research.

12.  Other ways to trim budgets without harming the science 
Budgets could be trimmed in many other ways, too:  no buying journal subscriptions on a grant (universities have subscriptions), less travel to meetings (we have Skype and Hangout!), shared costly equipment rather than a sequencer in every lab.  Grants should be smaller but of longer duration, so investigators can spend their time on research rather than hustling new grants. Junk the use of 'impact' factors and other bean-counting ways of judging faculty.  It had a point once--to reduce discrimination and be more objective, but it's long been strategized and manipulated, substituting quantity for quality.  Better evaluation means are needed.  

These suggestions are perhaps rather radical, but to the extent that they can somehow be implemented, it would have to be done humanely.  After all, people playing the game today are only doing what they were taught they must do.  Real reform is hard because science is now an entrenched part of society.  Nonetheless, a fair-minded (but determined!) phase-out of the abuses that have gradually developed would be good for science, and hence for the society that pays for it.

***NOTES:  As this was being edited, NY state has apparently just made its universities tuition-free for those whose families are not wealthy.  If true, what a step back towards sanity and public good!  The more states can get off the hooks of grants and strings-attached private donations, the more independent they should be able to be.

Also, the Apr 12 Wall St Journal has a story (paywall, unless you search for it on Twitter) showing the faults of an over-stressed health research system, including some of the points made here.  The article points out problems of non-replicability and other technical mistakes that are characteristic of our heavily over-burdened system.  But it doesn't go after the System as such, the bureaucracy and wastefulness and the pressure for 'big data' studies rather than focused research, and the need to be hasty and 'productive' in order to survive.

          The (bad) luck of the draw; more evidence   
A while back, Vogelstein and Tomasetti (V-T) published a paper in Science in which it was argued that most cancers cannot be attributed to known environmental factors, but instead were due simply to the errors in DNA replication that occur throughout life when cells divide.  See our earlier 2-part series on this.

Essentially the argument is that knowledge of the approximate number of at-risk cell divisions per unit of age could account for the age-related pattern of increase in cancers of different organs, if one ignored some obviously environmental causes like smoking.  Cigarette smoke is a mutagen and if cancer is a mutagenic disease, as it certainly largely is, then that will account for the dose-related pattern of lung and oral cancers.
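
In rough symbols (my notation, not theirs), the claim is that a tissue's lifetime cancer risk tracks the total number of stem-cell divisions it experiences:

    $\log(\text{lifetime risk}) \approx a + b \cdot \log(N_{\text{stem}} \times d)$

where $N_{\text{stem}}$ is the number of stem cells in the tissue and $d$ the number of divisions each undergoes in a lifetime; V-T reported a strong correlation across tissue types on this log-log scale.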

This got enraged responses from environmental epidemiologists whose careers are vested in the idea that if people would avoid carcinogens they'd reduce their cancer risk.  Of course, this is partly just the environmental epidemiologists' natural reaction to their ox being gored--threats to their grant largesse and so on.  But it is also true that environmental factors of various kinds, in addition to smoking, have been associated with cancer; some dietary components, viruses, sunlight, even diagnostic x-rays if done early and often enough, and other factors.

Most associated risks from agents like these are small, compared to smoking, but not zero and an at least legitimate objection to V-T's paper might be that the suggestion that environmental pollution, dietary excess, and so on don't matter when it comes to cancer is wrong.  I think V-T are saying no such thing.  Clearly some environmental exposures are mutagens and it would be a really hard-core reactionary to deny that mutations are unrelated to cancer.  Other external or lifestyle agents are mitogens; they stimulate cell division, and it would be silly not to think they could have a role in cancer.  If and when they do, it is not by causing mutations per se.  Instead mitogenic exposures in themselves just stimulate cell division, which is dangerous if the cell is already transformed into a cancer cell.  But it is also a way to increase cancer by just what V-T stress: the natural occurrence of mutations when cells divide.

There are a few who argue that cancer is due to transposable elements moving around and/or inserting into the genome where they can cause cells to misbehave, or other perhaps unknown factors such as of tissue organization, which can lead cells to 'misbehave', rather than mutations.

These alternatives are, currently, a rather minor cause of cancer.  In response to their critics, V-T have just published a new multi-national analysis that they suggest supports their theory.  They attempted to correct for the number of at-risk cells and so on, and found a convincing pattern that supports the intrinsic-mutation viewpoint.  They did this to rebut their critics.

This is at least in part an unnecessary food-fight.  When cells divide, DNA replication errors occur.  This seems well-documented (indeed, Vogelstein did some work years ago that showed evidence for somatic mutation--that is, DNA changes that are not inherited--in the genomes of cancer cells compared to normal cells of the same individual).  Indeed, for decades this has been known in various levels of detail.  Of course, showing that this is causal rather than coincidental is a separate problem, because the fact of mutations occurring during cell division doesn't necessarily mean that the mutations are causal.  However, for several cancers the repeated involvement of specific genes, and the demonstration of mutations in the same gene or genes in many different individuals, or of the same effect in experimental mice and so on, is persuasive evidence that mutational change is important in cancer.

The specifics of that importance are in a sense somewhat separate from the assertion that environmental epidemiologists are complaining about.  Unfortunately, to a great extent this is a silly debate. In essence, besides professional pride and careerism, the debate should not be about whether mutations are involved in cancer causation but whether specific environmental sources of mutation are identifiable and individually strong enough, as x-rays and tobacco smoke are, to be identified and avoided.  Smoking targets particular cells in the oral cavity and lungs.  But exposures that are more generic, but individually rare or not associated with a specific item like smoking, and can't be avoided, might raise the rate of somatic mutation generally.  Just having a body temperature may be one such factor, for example.

I would say that we are inevitably exposed to chemicals and so on that will potentially damage cells, mutation being one such effect.  V-T are substantially correct, from what the data look like, in saying that (in our words) namable, specific, and avoidable environmental mutagens are not the major systematic, organ-targeting cause of cancer.  Vague and/or generic exposure to mutagens will lead to mutations more or less randomly among our cells (maybe, depending on the agent, differently depending on how deep in our bodies the cells are relative to the outside world or other means of exposure).  The more at-risk cells, and the longer they're at risk, the greater the chance that some cell will experience a transforming set of changes.

Most of us probably inherit mutations in some of these genes from conception, and have to await other events to occur (whether these are mutational or of another nature as mentioned above).  The age patterns of cancers seem very convincingly to show that.  The real key factor here is the degree to which specific, identifiable, avoidable mutational agents can be identified.  It seems silly or, perhaps as likely, mere professional jealousy, to resist that idea.

These statements apply even if cancers are not all, or not entirely, due to mutational effects.  And, remember, not all of the mutations required to transform a cell need be of somatic origin.  Since cancer is mostly, and obviously, a multi-factor disease genetically (not a single mutation as a rule), we should not have our hackles raised if we find what seems obvious, that mutations are part of cell division, part of life.

There are curious things about cancer, such as our delayed onset ages, despite our large body size, relative to the occurrence of cancer in smaller, shorter-lived animals like mice.  And different animals of different lifespans and body sizes, even different rodents, have different lifetime cancer risks (some of which may be the result of details of their inbreeding history or of inbreeding itself).  Mouse cancer rates increase with age and hence with the number of at-risk cell divisions, but the overall risk at very young ages, despite many fewer cell divisions (yet similar genome sizes), shows that even the spontaneous mutation idea of V-T has problems.  After all, elephants are huge and live very long lives; why don't they get cancer much earlier?

Overall, if correct, V-T's view should not give too much comfort to our 'Precision' genomic medicine sloganeers, another aspect of budget protection, because the bad-luck mutations are generally somatic, not germline, and hence not susceptible to Big Data epidemiology, genetic or otherwise, that depends on germ-line variation as the predictor.

Related to this are the numerous reports of changes in life expectancy among various segments of society based on behaviors, most recently, for example, the opioid epidemic among whites in depressed areas of the US.  Such environmental changes are not predictable specifically, not even in principle, and can't be built into genome-based Big Data, or the budget-promoting promises coming out of NIH about such 'precision'.  Even estimated lifetime cancer risks associated with mutations in clear-cut risk-affecting genes, like BRCA1 mutations and breast cancer, vary greatly from population to population and study to study.  The V-T debate, and their obviously valid point, regardless of the details, is only part of the lifetime cancer risk story.

ADDENDUM 1
Just after posting this, I learned of a new story on this 'controversy' in The Atlantic.  It is really a silly debate, as noted in my original version.  It tacitly makes many different assumptions about whether this or that tinkering with our lifestyles will add to or reduce the risk of cancer and hence support the anti-V-T lobby.  If we're going to get into the nitty-gritty and typically very minor details about, for example, whether the statistical colon-cancer-protective effect of aspirin shows that V-T were wrong, then this really does smell of academic territory defense.

Why do I say that?  Because if we go down that road, we'll have to say that statins are cancer-causing, and so is exercise, and kidney transplants and who knows what else.  They cause cancer by allowing people to live longer, and accumulate more mutational damage to their cells.  And the supposedly serious opioid epidemic among Trump supporters actually is protective, because those people are dying earlier and not getting cancer!

The main point is that mutations are clearly involved in carcinogenesis, cell division life-history is clearly involved in carcinogenesis, environmental mutagens are clearly involved in carcinogenesis, and inherited mutations are clearly contributory to the additional effects of life-history events.  The silly extremism to which the objectors to V-T would take us would be to say that, obviously, if we avoided any interaction whatsoever with our environment, we'd never get cancer.  Of course, we'd all be so demented and immobilized with diverse organ-system failures that we wouldn't realize our good fortune in not getting cancer.

The story and much of the discussion on all sides is also rather naive even about the nature of cancer (and how many or of which mutations etc it takes to get cancer); but that's for another post sometime.

ADDENDUM 2
I'll add another new bit to my post that I hadn't thought of when I wrote the original.  We have many ways to estimate mutation rates, in nature and in the laboratory.  They include parent-offspring comparisons in genomewide sequencing samples, and there have been sperm-to-sperm comparisons.  I'm sure there are many other sets of data (see Michael Lynch in Trends in Genetics, 2010 Aug; 26(8): 345-352).  These give a consistent picture, and one can say, if one wants to, that the inherent mutation rate is due to identifiable environmental factors; but given the breadth of the data, that's not much different from saying that mutations are 'in the air'.  There are even sex-specific differences.

The numerous mutation detection and repair mechanisms built into genomes add to the idea that mutations are part of life; they are not, for example, related to modern human lifestyles.  Of course, evolution depends on mutation, so it cannot be, and never has been, reduced to zero--a species that couldn't change doesn't last.  Mutations occur in plants and animals and prokaryotes, in all environments and, I believe, generally at rather similar species-specific rates.

If you want to argue that every mutation has an external (environmental) cause rather than an internal molecular one, you are merely saying there's no randomness in life or imperfection in molecular processes.  That is as much a philosophical as an empirical assertion (as perhaps any quantum physicist can tell you!).  The key, as asserted in the post here, is that for the environmentalists' claim to make sense--for a factor to be a mutational cause in the meaningful sense--the force or factor must be systematic, identifiable, and tissue-specific, and it must be shown how it gets to the internal tissue in question and not to other tissues on the way in, etc.

Given how difficult it has been to chase down most environmental carcinogenic factors to which exposure is more than very rare, and given that the search has been going on for a very long time and only a few factors have been found that are clearly causal in themselves (ultraviolet radiation, Human Papilloma Virus, ionizing radiation--the ones mentioned in the post), whatever is left over must be very weak, non-tissue-specific, rare, and the like.  Even radiation-induced lung cancer in uranium miners has been challenging to prove (for example, because miners also largely were smokers).

It is not much of a stretch to say that even if, in principle, all mutations arising in our body's lifetime were due to external exposures, and even if the relevant mutagens could be identified and shown in some convincing way to be specifically carcinogenic in specific tissues, in practice, if not in ultimate reality, the aggregate exposures to such mutagens are unavoidable and epistemically random with respect to tissue and gene.  That, I would say, is the essence of the V-T finding.

Quibbling about that aspect of carcinogenesis is for those who have already determined how many angels dance on the head of a pin.
          Is genetics still metaphysical? Part V. Examples of conditions that lead to transformative insights   
A commenter on this series asked what I thought "a theory of biology should (realistically) aspire to predict." The series (part 1 here) has discussed aspects of the life sciences in which we don't currently seem to have the kind of unifying underlying theory found in other physical sciences. I'm not convinced that many people even recognize the problem.

I couched the issues in the context of asking whether the 'gene' concept was metaphysical or was more demonstrably or rigorously concrete.  I don't think it is concrete, and I do think many areas of the life sciences are based on internal, generic statistical or sampling comparisons of one sort of data against another (e.g., genetic variants found in cases vs controls in a search for genetic causes of disease), rather than on comparing data against some prior specific theory of causation, beyond vacuously true assertions like 'genes may contribute to risk of disease'.  I don't think there's an obvious current answer to my view that we need a better theory of biology, nor, of course, do I claim to have that answer.

I did suggest in this series that perhaps we should not expect biology to have the same kind of theory found in physics, because our current understanding doesn't (or at least shouldn't) lead us to expect the same kind of cause-effect replicability.  Evolution--one of the basic revolutionary insights in the history of science, and one about life--specifically asserts that life got the way it is by not being replicable (e.g., in one process, by natural selection among different--non-replicate--individuals).  But that's also a very vanilla comment.

I'll try to answer the commenter's question in this and the next post.  I'll do it in a kind of 'meta' or very generic way, through the device of presenting examples of the kind of knowledge landscape that has stimulated new, deeply synthesizing insight in various areas of science.

1.  Relativity
History generally credits Galileo with the first modern understanding that some aspects of motion appear different from different points of view.  A classic case was of a ship gliding into the port of Genoa: if someone inside the ship dropped a ball, it would land at his feet, just as it would for someone on land.  But someone on land watching the sailor through a window would see the ball move not just down but also along an angled path toward the port--the hypotenuse of a right triangle, which is longer than the straight-down distance.  But if the two observations of the same event were quantitatively different, which was 'true'?  Eventually, Einstein extended this question using images such as trains and railroad stations: a passenger midway between two lightbulbs, one at each end of a train, who switched them on would see both flashes at the same time.  But a person at a station the train was passing through would see the rear flash before the front one.  So what does this say about simultaneity?
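
In back-of-the-envelope terms (my notation, not Galileo's): if the ball falls from height h in time t while the ship moves at speed v, the two observers assign the same event different path lengths,

$$ d_{\text{ship}} = h, \qquad d_{\text{shore}} = \sqrt{h^2 + (v\,t)^2} > h. $$

Galilean relativity reconciles the two descriptions; Einstein's later step was to show that performing the same reconciliation with a finite, invariant speed of light forces simultaneity itself to become frame-dependent.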

These and many other examples showed that, contrary to Isaac Newton's view of space and time as existing in an absolute sense, they depend on one's point of view--in the sense that if you adjust for the viewpoint, all observers will see the same laws of Nature at work.  Einstein was working in the Swiss patent office at a time when inventors were trying to solve the problem of keeping coordinated time--this affected European railroads, but also telecommunication, marine transport, and so on.  Thinking synthetically about various aspects of the problem later led Einstein to show that a similar answer applied to acceleration, and to a fundamentally different, viewpoint-dependent understanding of gravity as curvature in space and time itself--a deeply powerful understanding of the inherent structure of the universe.  A relativistic viewpoint helped account for the nature and speed of light, aspects of motion and momentum, electromagnetism, the relationship between matter and energy, the composition of 'space', the nature of gravity, time and space as a unified matrix of existence, the dynamics of the cosmos, and so on, all essentially in one go.

The mathematics is very complex (and beyond my understanding!).  But the idea itself was mainly based on rather simple observations (or thought experiments), and did not require extensive data or exotically remote theory, though it has been shown to fit very diverse phenomena better than the former non-relativistic views, and it is required for aspects of modern life, as well as for our wish to understand the cosmos and our place in it.  That's how we should think of a unifying synthesis.

The insight that led to relativity as a modern concept--that there is no one 'true' viewpoint ('reference frame')--is a logically simple one, but one that united many different well-known facts and observations that had not previously been accounted for by the same underlying aspect of Nature.

2.  Geology and Plate Tectonics (Continental Drift)
Physics is very precise from a mathematical point of view, but transformative synthesis in human thinking does not require that sort of precision.  Two further examples, from geology and from evolution, will show this, and show that principles or 'laws' of Nature can take various forms.

The prevailing western view until the last couple of centuries, even among scientists, was that the cosmos had had a point of Creation, basically in its present form, a few thousand years ago.  But the age of exploration, occasioned by better seagoing technology and a spirit of global investigation, turned up oddities, such as sea shells at high elevations, and fossils.  The orderly geographical nature of coral atolls, Pacific island chains, and volcanic and earthquake-prone regions was discovered.  Remnants of climates very different from present ones were found in some locations.  Similar-looking biological species (and fossils) were found in disjoint parts of the world, such as South Africa, South America, and eventually Antarctica.  These were given various local, ad hoc, one-off explanations.  There were hints in previous work, but an influential author was Alfred Wegener, who wrote (e.g., from 1912--see Wikipedia: Alfred Wegener) about the global map, showing evidence of continental drift, the continents being remnants of a separating jigsaw puzzle, as shown in the first image here; the second shows additional evidence of what were strange similarities in distantly separated lands.  This knowledge had been accumulated by the many collectors and travelers of the Age of Exploration.  Better maps showed that continents sometimes seemed 'fitted' to each other like pieces of a jigsaw puzzle.



Geological ages and continental movement (from Hallam, A Revolution in the Earth Sciences, 1973; see text)


Evidence for the continental jigsaw puzzle (source Wikipedia: Alfred Wegener, see text)

Also, if the world had been young and static since some 'creation' event, these individual findings were hard to account for.  This complemented the ideas of early geologists like Hutton and Lyell around the turn of the 19th century, who noticed that deep time was consistent with the (pardon the pun) glacially slow observable changes in glaciers, river banks, and coastlines that geologists had documented.  Their idea of 'uniformitarianism' was that processes observable today occurred during the deep past as well, meaning that extrapolation was a valid way to make inferences.  Before this, ad hoc, isolated, and unrelated explanations had generally been offered piecemeal for these sorts of facts: similar plants or animals on oceanically separated continents, for example, must have gotten there by rafting on detritus that rivers had borne to the sea.

Many very different kinds of evidence were then assembled, and a profound insight was the result, which we today refer to by terms such as 'plate tectonics' or 'continental drift'.  There are now countless sources for the details, but one that I think is interesting is A Revolution in the Earth Sciences, by A. Hallam, published by Oxford University Press in 1973, only a few years after what is basically the modern view had been convincingly accepted.  His account is interesting because we now know so much more that reinforces the idea, but at the time it was as stunning a thought-change as biological evolution was in Darwin's time.  I was a graduate student then, and we experienced the Aha! realization as it took place: before our very eyes, so to speak, diverse facts were being fit under the same synthesizing explanation (even as some of our faculty were still teaching the old, forced, stable-earth explanations).

Among much else, the magnetic orientation of geological formations, including symmetric stripes of magnetic reversals flanking the Mid-Atlantic Ridge, documented the sea-floor spreading that separated the broken-off continental fragments--the pieces of the jigsaw puzzle.  Mountain heights and sea depths gained new explanations on a geologic (and very deep time) scale, once the earth was accepted as being far older than biblical accounts allowed.  Atolls and the volcanic Ring of Fire are accounted for by continental motions.

This was not a sudden one-factor brilliant finding, but rather the accumulation of centuries of slowly collected global data from the age of sail (corresponding to today's fervor for 'Big Data'?).  A key is that the local facts were not really accounted for by locally specific explanations, but were globally united as instances of the same general underlying processes.  Coastlines, river gorges, mountain building, fossil-site locations, evidence of very different past climates, and so on were brought under the umbrella of one powerful, unifying theory.  It was the recognition that very disparate facts could be synthesized that led to the general acceptance of the theory.  Indeed, subsequent and extensive global data continue to this day to make the hypothesis of early advocates like Wegener pay off.

3.  Evolution itself
It is a 100% irrefutable explanation for life's diversity to say that God created all the species on Earth.  But that is of no use in understanding the world, especially if we believe, as seems quite obvious, that the world, and the cosmos more broadly, follows regular patterns or 'laws'.  Creationist views of life's diversity, of fossils, and so on are all post hoc, special explanations for each instance: each living species can be credited to a separate divine reason or event of creation.  But when world travel became more common and practicable, many facts and patterns were observed that made such explanations seem lame and tautological at best.  For example, fossils resembled crude forms of species present today in the same area.  Groups of similar species were found living in a given region, with clusters of somewhat less similar species elsewhere.  The structures of species, such as of vertebrates or insects, showed similar organization, and one could extend this to deeper if more divergent patterns in other groups (e.g., what we would now call genera, phyla, and so on).  Basic aspects of inheritance seemed to apply to all species, plant and animal alike.  If all species had been, say, on the same Ark, why were similar species so geographically clustered?

It dawned on investigators scanning the Victorian Age's global collections, and in particular on Darwin and Wallace, that because offspring resemble their parents, though they are not identical to them, and because individuals and species must feed on each other or compete for resources, those that did better would proliferate more.  If they became isolated, they could diverge in form; not only that, but the traits of each species would be suited to its circumstances, even if species fed off each other.  Over time this would also produce different, but related, species in a given area.  New species were not seen directly to arise, but precedents from breeders' history showed the effects of selective reproduction, and geologists like Lyell had made biologists aware of the slow but steady nature of geological change.  If one accepted the idea that, rather than the short history implied by biblical readings, life on earth had been here for a very long time, these otherwise very disparate facts about the nature of life and the reasons for its diversity might have a common 'uniformitarian' explanation--a real scientific explanation in terms of a shared causative process, rather than a series of unrelated creations.  The synthesis of a world's worth of very diverse facts made the global pattern of life make causal and explanatory sense, in a way it never had before.

Of course the fact of evolution does not directly inform us about genetic causation, which has been the motivating topic of this series of posts.  We'll deal with this in our next post in the series.

Insight comes from facing a problem through synthesis and pattern recognition
The common feature of these examples of scientific insight is that they involve synthesis derived from pattern recognition. There is a problem to be solved or something to be explained, and multiple facts that may not have seemed related and that have been given local, ad hoc, one-off 'explanations'. Often the latter are forced or far-fetched, or 'lazy' (as in Creationism, which required no understanding of the birds and the beasts). And because such explanations are not based on any sort of real-world process, they cannot be tested, tempered, and improved.  Unlike Creationist accounts, scientific accounts can be shown to be wrong, and hence our understanding improved.

In our examples of the conditions in which major scientific insights have occurred, someone, or some few, looking at a wealth of disparate facts, or perhaps finding some new fact relevant to them, saw through the thicket of 'data' and found meaning.  The more a truly new idea strikes home, the more facts it incorporates--even facts previously not considered to be relevant.

Well!  If we don't have diverse, often seemingly disparate facts in genetics, then nobody does!  But the situation now seems somewhat different from the above examples: indeed, with precedents like those above, and several others including historic advances in chemistry, quantum physics, and astronomy, we seem to hasten to generalize and claim our own synthesizing 'laws'.  But how well are we actually doing, and have we identified the right primary units of causation on which to do the same sort of synthesizing?  Or do we need to?

 I'll do my feeble best to offer some thoughts on this in the final part of this series.
          Digital Business Integration Manager - Big Data - Accenture - Canada   
Choose Accenture, and make delivering innovative work part of your extraordinary career. Join Accenture and help transform leading organizations and communities...
From Accenture - Tue, 13 Jun 2017 02:38:27 GMT - View all Canada jobs
          Big Data Engineer   
Best Buy Canada Ltd. (Burnaby BC): "RSS feeds and more; automating and QAing the process. You have your ear to the ground on emerging Big Data technologies: Google Cloud Platform, TensorFlow, IBM Watson... your educational background?...."
          What we learned in A Coruña: conclusions on Data Science   
We have just held in A Coruña, at the Afundación headquarters, the first Data Science Summit of the CorBI Foundation (Coruña Biomedical Research Institute), with which the CorBI Foundation aimed to open a forum for discussion on the relevance of Big Data processing in the fields of neuroscience and change [...]
          Challenges and Opportunities with Big Data 2011-1   

The promise of data-driven decision-making is now being recognized broadly, and there is growing enthusiasm for the notion of "Big Data." While the promise of Big Data is real -- for example, it is estimated that Google alone contributed 54 billion dollars to the US economy in 2009 -- there is currently a wide gap between its potential and its realization.
Heterogeneity, scale, timeliness, complexity, and privacy problems with Big Data impede progress at all phases of the pipeline that can create value from data. The problems start right away during data acquisition, when the data tsunami requires us to make decisions, currently in an ad hoc manner, about what data to keep and what to discard, and how to store what we keep reliably with the right metadata. Much data today is not natively in structured format; for example, tweets and blogs are weakly structured pieces of text, while images and video are structured for storage and display, but not for semantic content and search: transforming such content into a structured format for later analysis is a major challenge. The value of data explodes when it can be linked with other data, thus data integration is a major creator of value. Since most data is directly generated in digital format today, we have the opportunity and the challenge both to influence the creation to facilitate later linkage and to automatically link previously created data. Data analysis, organization, retrieval, and modeling are other foundational challenges. Data analysis is a clear bottleneck in many applications, both due to lack of scalability of the underlying algorithms and due to the complexity of the data that needs to be analyzed. Finally, presentation of the results and its interpretation by non-technical domain experts is crucial to extracting actionable knowledge.
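
As a small illustration of the structuring problem described above (a sketch of the general idea, not anything prescribed by the paper), consider extracting queryable fields from a weakly structured tweet; the record layout here is an assumption for illustration:

    # Turning a weakly structured tweet into a structured, linkable record.
    import re
    from dataclasses import dataclass, field

    @dataclass
    class TweetRecord:
        author: str
        text: str                                      # original text is preserved
        hashtags: list = field(default_factory=list)   # extracted, queryable fields
        mentions: list = field(default_factory=list)

    def structure_tweet(author: str, raw: str) -> TweetRecord:
        return TweetRecord(
            author=author,
            text=raw,
            hashtags=re.findall(r"#(\w+)", raw),
            mentions=re.findall(r"@(\w+)", raw),
        )

    rec = structure_tweet("analyst01", "Big Data pipelines are hard #bigdata @acm")
    print(rec.hashtags, rec.mentions)   # ['bigdata'] ['acm']

Once fields like these exist, the record can be joined against other datasets--the linkage step the authors identify as a major creator of value.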
During the last 35 years, data management principles such as physical and logical independence, declarative querying, and cost-based optimization have led to a multi-billion-dollar industry. More importantly, these technical advances have enabled the first round of business intelligence applications and laid the foundation for managing and analyzing Big Data today. The many novel challenges and opportunities associated with Big Data necessitate rethinking many aspects of these data management platforms, while retaining other desirable aspects. We believe that appropriate investment in Big Data will lead to a new wave of fundamental technological advances that will be embodied in the next generations of Big Data management and analysis platforms, products, and systems.
We believe that these research problems are not only timely, but also have the potential to create huge economic value in the US economy for years to come. However, they are also hard, requiring us to rethink data analysis systems in fundamental ways. A major investment in Big Data, properly directed, can result not only in major scientific advances, but also lay the foundation for the next generation of advances in science, medicine, and business.


          Making Data Governance "Agile" Again   

By Dennis D. McDonald

Introduction

In Can data governance be agile? Nancy Couture identifies the most important first step in establishing an "agile" data governance program:

An alternative, and more agile approach, is to identify smaller data governance initiatives based on strategic projects or business needs, and build from there. In this way, the organization can keep everyone apprised of progress and decisions, but the work effort is limited and focused. And the planned business value can be realized more quickly, thus increasing interest. 

The First Step

The importance of this first step, especially the part about "... based on strategic projects or business needs," shouldn't be underestimated.

Since beginning my own research into big data project management over a year ago, I have realized that data governance shares a lot with many other organizational priorities that generate a call for a comprehensive and strategic approach. Data, when looked at as a resource, permeates the organization. When examining how best to manage data comprehensively, there's a tendency to suggest a wide-ranging solution that might be needed but that, unfortunately, might be doomed to failure.

No More Ocean Boiling

The prospect of failure exists partly because the day of the "boil the ocean" approach to massive enterprise level projects is long gone, especially in organizations that are legacy-system dependent and heavily siloed. Managing comprehensive programs that involve significant changes to technology, culture, and business processes requires a great deal of support and management skill, time, attention -- and money.

Also needed is the realization that change is inevitable and that plans that attempt too much inevitably have to be changed -- this is one of the reasons for the popularity of the "agile" movement, which focuses on delivering value more quickly in manageable chunks.

Needed: Both Tactics and Strategy

As Couture suggests, it makes more sense to start off with something that's (a) important and (b) definable.

This doesn't mean you shouldn't think strategically when faced with "upping" your data governance game. When thinking about data governance it pays to think both strategically and tactically. We want to deliver value in the short term while at the same time providing a foundation for the future. But how?

An Approach

The following is based on ongoing discussions with consulting colleagues William Moore and Mark Bruscke. Will has evolved a structured approach to data governance that emphasizes how language and terminology are used. Mark is an experienced data architect with extensive enterprise-level experience. My own focus is on IT strategy and project management, especially in data-intensive projects.

What follows are some of the actions we recommend to address both the tactics and strategy associated with improved data governance:

  1. Establish Data Governance Management Structure
  2. Understand Language and Terminology
  3. Manage Metadata
  4. Perform Data Stewardship

1. Establish Data Governance Management Structure

Even for "tactical" projects focusing on data governance applied to selected high-importance problems or issues, key stakeholders must be represented including not only IT but potentially all groups impacted by how data are managed. This might include groups as diverse as Legal, Marketing, Customer Service, Research, Data Science, Sales, and Administration. Any group that might somehow be directly or indirectly associated with producing, managing, or using data should be considered for some sort of stakeholder role.

Sometimes management of the data governance process is formalized in a "data governance council," which oversees policies and practices regarding data and metadata. This may not be appropriate for short-term or tactically focused projects managed following an agile process. Eventually, though, such a "council" will have to set policy and even resolve disputes when different groups or systems use different terminology to refer to what appears to be the same concept. Starting such a group requires settling on a data governance council structure (e.g., centralized, federated, hub-and-spoke, etc.), establishing and sharing a group charter, recruiting members, and overseeing the organized data governance operation.

Initially a "data advisory council" should include, even in a tactically focused short term project, those who are best able to articulate requirements and user stories concerning data associated with the problem or issue being targeted.

Given the potential need for experimentation at this stage, the team must be ready to change as requirements (and user stories) evolve. Flexibility and collaboration are needed. The more focused the initial targets are, the better able the team will be to share information and make decisions.

2. Understand Language and Terminology

Whether dealing with a short term tactically-focused project or a longer term project with strategic implications for how the organization manages and uses data, participants need to understand how language and terminology are managed and used in relation to the specific problems or issues being addressed.

A structured and open process will be needed to identify and document the most important concepts that need to be understood and documented, regardless of the variety of ways people and machines refer to these concepts.

Moore calls these "key business attributes." The process of discovering, defining, and modeling them is "business attribute analysis."

The basic approach for identifying key business attributes involves not only reviewing how data and metadata are currently defined but also analyzing how important business concepts are referred to in the organization's documentation, emails, reports, and other communication media. This is one of the reasons why initial steps need to focus on well-defined projects or issues: scope control. A sketch of what this scanning might look like follows below.
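
As a toy illustration of the document-scanning half of this analysis (my sketch, not a prescribed part of Moore's method), a first pass might simply surface frequently used candidate terms for the team to review:

    # Surface candidate business terms from existing documentation.
    from collections import Counter
    import re

    def candidate_terms(documents, min_count=3):
        counts = Counter()
        for doc in documents:
            counts.update(re.findall(r"[a-z][a-z_]{3,}", doc.lower()))
        return [term for term, n in counts.most_common() if n >= min_count]

    docs = ["Customer churn report, Q2", "Churn by customer segment"]
    print(candidate_terms(docs, min_count=2))   # ['customer', 'churn']

The output is only raw material; deciding which terms are genuinely key business attributes remains a collaborative, human judgment.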

One output of this collaborative business attribute analysis, carried out with support of the data governance team, is a high level model of key business attributes and how they relate to the problem or issue that's initially being targeted. Two important questions to address during this process are:

  1. What do we know now about solving this problem or making this decision?
  2. What do we need to know to solve this problem or make this decision?

The business attribute model that emerges from this analysis will surely evolve but will establish the basis for the next step by identifying what data need to be managed.

3. Manage Metadata

Our goal here is to create a metadata repository that catalogs key data concepts (see above) and how these are expressed and used throughout the organization’s business processes, systems, and databases.

For a particular problem domain defined by the process or issue being addressed, we create an inventory and catalog of target data related to the key business attributes identified above. We document where and how these data are used and who is responsible for them.

Our basic approach is to review all relevant business attributes, data, definitions, decision rules, data models, and the manner in which data are linked, expressed, and used throughout the targeted problem domain.
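
A minimal sketch of what one entry in such a catalog might look like (the field names are my assumptions, not a schema the approach prescribes):

    # One entry in a metadata catalog tying a key business attribute
    # to its definition, synonyms, systems, and steward.
    from dataclasses import dataclass, field

    @dataclass
    class CatalogEntry:
        attribute: str                                 # e.g., "customer"
        definition: str                                # agreed business definition
        synonyms: list = field(default_factory=list)   # how people/systems refer to it
        systems: list = field(default_factory=list)    # where the data lives
        steward: str = ""                              # who is responsible for it

    catalog = {}
    catalog["customer"] = CatalogEntry(
        attribute="customer",
        definition="A party with at least one active account",
        synonyms=["client", "account_holder", "CUST_ID"],
        systems=["CRM", "billing_db.accounts"],
        steward="Data Stewardship team",
    )
    print(catalog["customer"].synonyms)   # ['client', 'account_holder', 'CUST_ID']

Even a simple structure like this makes the disputes described earlier--different groups using different terms for the same concept--visible and resolvable.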

Process and information are what matter here, not the tools used. To proceed quickly at this stage, we recommend using existing collaboration, database, and document management tools. Accessibility and ease of use are major concerns. We want to move quickly and deliberately, but not in secret.

The primary deliverable of this process is an evolving catalog that supports data governance, consistency, data transformation requirements, and (where necessary and justified) intelligent standardization.

Most of all we want to clearly and transparently document how data are used in machine-to-machine, machine-to-human, and human-to-human communication. And we want the systems and processes used here to be reusable and, where possible, scalable.

4. Perform Data Stewardship

We need to establish a dedicated and staffed data stewardship process that works across and supports the systems and processes described in (1), (2), and (3).

The Data Steward provides the "boots on the ground" in the data governance process. Our role as consultants is to help establish the above systems and processes and to function initially as the client's data stewards, while simultaneously mentoring others so the client can manage its own data governance and stewardship processes and systems.

Once a specific problem or issue is addressed and improvements identified for how data and metadata should be governed and used to address that problem, the processes and procedures we have gone through are represented as documented processes that can then be adapted for the next problem.

Conclusions

In summary we recommend:

  1. Start with a well defined data-dependent problem or issue.
  2. Move quickly but stay disciplined.
  3. Be collaborative and transparent.
  4. Treat this as a learning process; knowing in advance what benefits better data and analytics will bring is impossible.
  5. Keep track of costs.
  6. Don't just focus on existing technology and structured data.
  7. Keep management informed and involved.
  8. Document what can be done better next time.
  9. Focus on detail but keep the big picture in mind.
  10. Use collaboration and transparency to overcome resistance, not hierarchy and authority.

Copyright (c) 2017 by Dennis D. McDonald. Interested in applying these ideas to your own organization's data governance? Contact me in Alexandria Virginia at 703-402-7382 or by email at ddmcd@ddmcd.com.


          Data Scientist - Data Insights & Analytics - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 17 Jun 2017 08:59:04 GMT - View all São Paulo, SP jobs
          Data Scientist - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all São Paulo, SP jobs
          Data Scientist - RA - CSI - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all São Paulo, SP jobs
          Data Engineer - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:07 GMT - View all São Paulo, SP jobs
          Data Engineer - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:06 GMT - View all São Paulo, SP jobs
          Data Scientist - Mkt Place Routing - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow. 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:05 GMT - View all São Paulo, SP jobs
          CZ Podcast 167 - Machine learning startups   
In episode 167 we interviewed Bradford Cross (Google, Flightcaster, Prismatic), who leads a machine learning and big data venture capital fund in Central Europe. We discussed various topics - Scala vs. Clojure, the Prismatic acquisition by LinkedIn, machine learning at scale, etc. One of the key topics was culture in startups...
          Young Aragonese people train with Telefónica   

Two teams of young people from Zaragoza presented their digital employment projects yesterday, the fruit of the training they have received in big data and video game programming under the agreement between Fundación Telefónica and the Consejería de Economía, ...

          Software Engineering Manager - DataDirect Networks - India   
DataDirect Networks (DDN.com) is a world leader in massively-scalable storage platforms and solutions engineered for the Big Data and Cloud era. Our storage,
From DataDirect Networks - Wed, 28 Jun 2017 08:29:53 GMT - View all India jobs
          Staff Engineer, R&D Test Engineering Job - SanDisk - Milpitas, CA   
From mobile devices to hyperscale data centers, SanDisk storage solutions make the incredible possible. Hadoop big data database management and scripting in...
From SanDisk - Sat, 06 May 2017 05:16:29 GMT - View all Milpitas, CA jobs
          Big Data in China's Smart City Market Analysis and Forecast    
(EMAILWIRE.COM, June 30, 2017 ) Smart cities have grown vigorously in China because of the government's policy support and their potential to alleviate pressing problems encountered by China along the way of rapid economic and urban development, including declining labor force, ongoing urbanization,...
          Big Data Developer - Intelligent Solutions - JP Morgan Chase - New York, NY   
Experience with machine learning is preferred. Knowledge of industry leading Business Intelligence tools. Consistently evaluates decisions in terms of impact to...
From JPMorgan Chase - Tue, 09 May 2017 10:34:19 GMT - View all New York, NY jobs
          Senior Manager – Big Data Engineering - Capital Markets Placement - New York, NY   
We’re looking for someone who welcomes challenges and is hyper-focused on delivering exceptional results to internal business customers while creating a...
From CMP.jobs - Mon, 01 May 2017 09:22:09 GMT - View all New York, NY jobs
          Amazon Eager for Whole Foods Data — Associated Press   
If the Amazon takeover of Whole Foods happens, Amazon gets a lot more than stores and warehouses: It also gets data. Lots and lots of data.
"To be sure, there are plenty of other benefits to the combination. Amazon will derive steady revenue from more than 460 Whole Foods stores; it can also introduce robots and other automation technologies to cut costs and improve the bottom line. But ultimately, Amazon wants to sell even more goods and services to both online and offline shoppers — including stuff they might not even realize they need."

Supplier Takeaway

Big data is a big asset. How is your business making use of data?  

Resources for Walmart Suppliers

How to Prepare for Retail-Ready Packaging | Courses for Walmart Suppliers | Podcast: Amazon…Whole Foods…Oh, Yeah, We’re Milking It

          Shining Women Are People Too   
In this age of big data and Amazon.com, it might be easy to think companies, especially our employers, have our best interests at heart. They know exactly what we want, before we want it. But as disasters such as the Deepwater Horizon explosion and leak show us, sometimes money comes before everything, even workers' lives. […]
          Social Media, Big Data and Libraries. The Next Step   

35th ADLUG Annual Meeting 2016 hosted by the University of the Basque Country (September 2016).
          Top Ten #ddj: This Week’s Top Data Journalism   
What’s the global #ddj community tweeting about? Our NodeXL mapping from June 19 to 25 includes Germany's housing discrimination problem by @SPIEGELONLINE and @br_data, data on police interaction with the public from @StanfordEng and @StanfordJourn and a report on big data for gender from @Data2X.
          Microsoft wants to help solve the challenges of national industry with the Innovation Challenge   

What are the biggest challenges standing, these days, in the way of some of the main industries that make up the national economy? How can technologies such as the Internet of Things (IoT), artificial intelligence, big data, machine learning, and augmented reality be used to solve these challenges, improve the consumer experience, and increase the efficiency of companies?


          Episode 319 – Psychographics 101   

Audio: https://www.corbettreport.com/mp3/episode319-lq.mp3
What do you get when you combine behavioural science with big data and use the new Frankenstein hybrid to better influence people's thoughts, opinions and desires? Why, psychographics of course! Join James today as he delves into the murky world of billionaire hedge fund owners, creepy thought manipulators and the Trump campaign.

          Account Executive Digital Services - mimacom - Cornelius, NC   
Good knowledge of MS-Office and Salesforce. Modern technologies and products in challenging fields such as Digital Transformation, Big Data and Cloud Computing....
From mimacom - Fri, 17 Mar 2017 22:48:54 GMT - View all Cornelius, NC jobs
          Hadoop Admin   
NY-Jericho. Duration: Long Term. Mode of Interview: Telephonic/Skype + in person. Currently looking for an energetic, high-performing Senior Hadoop Administrator with extensive hands-on experience around Hadoop-based Big Data solutions. This position is responsible for delivering the systems infrastructure solutions of the assigned big data application; identifying and documenting big data use case requirements; lea
          Web-Scale Converged Infrastructure Designed to Simplify IT   
Download this resource to key into the specs of one hyper-converged infrastructure, and discover why its implementation is ideal for supporting multiple, virtual workloads such as VDI, private cloud, database, OLTP and data warehouse as well as virtualized big data deployments. Published by: Dell EMC and Nutanix
          Red Hat Solutions Architect - MOBIA Technology Innovations - Canada   
Telecom Infrastructure, Converged Infrastructure, Cloud, Security and Big Data. Red Hat Solutions Architect....
From MOBIA Technology Innovations - Mon, 26 Jun 2017 13:15:01 GMT - View all Canada jobs
          IT Market Analyst covering Big Data & Analytics Technologies - (Milford)   
Job Description. Reports To: Vice President, Research & Analyst Services. Job Summary: ESG is seeking an experienced Analyst to join its Data Management & Analytics team. ESG Analysts gain a unique view of the technology industry by interacting with all constituencies across the IT ecosystem, including vendors, end-users, financial analysts, media, and channel partners. This unique perspective enables ESG Analysts to provide clients with sound, insightful intelligence to inform their most critical business decisions and guide their key initiatives.
          Principal Data Scientist - (Wellesley Hills)   
Working at Aetna -- the value to you: What does it mean to work at Aetna? A lot. From programs and benefits that support your financial, physical, and emotional health to opportunities to build your knowledge and expand your career, the company makes working here a valuable experience in many ways. Aetna's Data Analytics team is focused on delivering strategically impactful products to our internal customers by building analytically based solutions that integrate a wide range of internal and external large datasets within a cutting-edge Hadoop parallel processing environment. We are currently seeking a Principal Data Scientist in our Hartford, CT, Wellesley, MA, or New York, NY location. This position will be responsible for leveraging advanced statistical predictive modeling to evaluate scenarios and make predictions on future outcomes; it analyzes very large data sets in real-time databases and develops and implements mathematical approaches.

Position summary: This is a unique opportunity to develop new approaches leveraging the latest cutting-edge big data technologies. The successful candidate will provide strategic leadership for the development, validation, and delivery of algorithms, statistical models, and reporting tools, and will act as the analytic team lead for highly complex projects involving multiple resources and tasks, providing individual mentoring in support of company objectives.

Fundamental components: Leads development and execution of new and/or highly complex algorithms and statistical predictive models, and determines analytical approaches and modeling techniques to evaluate potential future outcomes. Establishes analytical rigor and statistical methods to analyze large amounts of data, using advanced statistical techniques and mathematical analyses; methods will be implemented in Hadoop and R using advanced technologies. Manages highly complex analytical projects from data exploration through model building, performance evaluation, and testing. Applies in-depth knowledge of systems and products to consult and advise on additional efforts across the organization/enterprise. Motivates team members, probes into technical details, and mentors others to do the same. Provides thought leadership and direction for analytic solutions, tools, and studies. Anticipates and solves strategic and high-risk business problems with broad impact on the business area by applying leading-edge theories and techniques to investigate problems, detect patterns, and recommend solutions. Provides guidance to develop an enterprise-wide analytics strategy and roadmap. Interacts with internal and external peers and management to share highly complex information and solutions related to areas of expertise and/or to gain acceptance of new or enhanced technology/business solutions.

Required skills and background: Years of progressively complex related experience. Technical background with modeling and programming; experience in SAS, SQL, or other programming languages. Advanced, in-depth specialization in mathematical analysis methods, predictive modeling, statistical analyses, machine learning, and big data technologies such as Python, R, and Hadoop. Demonstrated ability to communicate technical ideas and results to non-technical clients in written and verbal form. Comprehensive knowledge of health care industry products, systems, and business strategies; experience in the healthcare industry is preferred. Strong organizational management and leadership skills. Education: Master's degree; Ph.D. preferred.

Additional job information: Aetna continues to build a world-class Data Science organization to capture data, understand context, generate insights, and react in real time. We engage our business partners, providing solutions to improve the consumer experience, increase efficiencies, and optimize health outcomes for our members by leveraging cutting-edge technology. Aetna is about more than just doing a job. This is our opportunity to re-shape healthcare for America and across the globe. We are developing solutions to improve the quality and affordability of healthcare; what we do will benefit generations to come. We care about each other, our customers, and our communities. We are inspired to make a difference, and we are committed to integrity and excellence. Together we will empower people to live healthier lives. Aetna is an equal opportunity / affirmative action employer. All qualified applicants will receive consideration for employment regardless of personal characteristics or status. We take affirmative action to recruit, select, and develop women, people of color, veterans, and individuals with disabilities. We are a company built on excellence, with a culture that values growth, achievement, and diversity, and a workplace where your voice can be heard.

How to apply: Please use the Apply link to apply to this position. If you cannot view this link, you can find this position on our website by visiting www.aetna.com/working, clicking "apply online," searching openings, and entering BR in the keyword field. Additional information on what it's like to work for Aetna and more can also be found on our website. We value leadership, creativity, and initiative; if you share those values and a commitment to excellence and innovation, consider a career with Aetna. Aetna does not permit the use of tobacco-related products or drugs in the workplace. Aetna is an EO/AA Employer: Minorities/Women/Veterans/Disability. No search firms, please. You will not be asked for personal information until you have been fully evaluated through Aetna's screening processes. Aetna will never ask applicants for money. If you want to verify the identity of someone who contacts you from Aetna, call AETNAHR. Source: http://www.juju.com/jad/000000009qxk2z?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Software QA Engineer - (Cambridge)   
Summary: GCP Applied Technologies Inc. is a $1.5 billion, global provider of products and technology solutions for customers in the specialty construction chemicals, specialty building materials, and packaging sealants and coatings industries. Based in GCP's headquarters in Cambridge, MA, the applications development team is a small, motivated team of top-notch software developers and engineers working with the latest tools in an agile, start-up-like environment. We work on a variety of high-visibility technology projects that are central to the company's strategy, including Internet-of-Things (IoT), cloud applications, mobile applications, big data, machine learning and predictive analytics.

Job Description: Ideal candidates will enjoy working in a fast-paced environment that values collaboration and open teamwork, and working on a variety of software projects. Responsibilities include:

  1. In an Agile environment, create and execute test cases and test plans across the application portfolio.
  2. Isolate, document and track defects and take hands-on ownership of issues through their resolution.
  3. Design and implement test tools and scripts covering regression, performance and usability.
  4. Establish QA metrics and prepare daily/weekly test reports.
  5. Work with the development team to coordinate database and application deployments.

Candidates should be comfortable working in both Linux and Windows environments and have a strong understanding of the various parts of the technology stack to effectively isolate issues. Source: http://www.juju.com/jad/000000009qidh9?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Accelerating Big Data Processing with Hadoop, Spark and Memcached   
In this video from the 2015 Stanford HPC Conference, DK Panda from Ohio State University presents: Accelerating Big Data Processing with Hadoop, Spark and Memcached.
          Performance Optimization of Hadoop Using InfiniBand RDMA   
"The Hadoop framework has become the most popular open-source solution for Big Data processing. Traditionally, Hadoop communication calls are implemented over sockets and do not deliver best performance on modern clusters with high-performance interconnects. This talk will examine opportunities and challenges in optimizing performance of Hadoop with Remote DMA (RDMA) support, as available with InfiniBand, RoCE (RDMA over Converged Enhanced Ethernet) and other modern interconnects."
          DK Panda Presents: Big Data – Hadoop and Memcached   
DK Panda from Ohio State University presented this talk at the Stanford HPC & Exascale Conference. "As InfiniBand is getting used in scientific computing environments, there is a big demand to harness its benefits for enterprise environments for handling big data and analytics. This talk will focus on high-performance and scalable designs of Hadoop using native RDMA support of InfiniBand and RoCE."
          Offer - Urgent Java Architect for Trivandrum - INDIA   
VINIRMA Consulting Pvt. Ltd. is a 360-degree Human Resource Management Consulting and Staffing Services Organization with operations in UAE, Qatar, Bahrain, Australia, USA, Singapore & India. VINIRMA Consulting is currently looking for a Java Architect for one of our clients, a leading organization in Trivandrum, with the following skill set and terms and conditions.

Skillset required: Should have at least 6 years of experience in Java technologies. Should have a passion for performing technology research and for pursuing a career in the technology stream. The following skills are mandatory: object-oriented programming concepts; strong knowledge of Core Java and server-side Java programming; Java technologies such as Spring and Hibernate; knowledge of at least one Java MVC framework. The following skills are desirable: knowledge of a web framework; knowledge of Maven, Camel, and cache libraries; working experience in NoSQL and Big Data technologies; exposure to mobile apps, IoT, and wearables; working experience in Linux and AWS environments; UML.

Job description: Be a part of the Java center of excellence. Involve in Java component development and framework development. Perform research on Java technologies. Provide consultancy to various projects on Java technologies. Experience required: 6-10 years.

Terms and conditions: Joining time frame: 2 weeks (maximum 1 month). The selected candidates shall be direct employees of one of the leading organizations in Trivandrum. Should you be interested in this opportunity, please send your latest resume in MS Word format at the earliest to sreejith.murali@vamsystems.com or call +91 471 2766011.
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Proven track record working in industrial scale relational Oracle and SQL Server databases in a data warehousing setting such as healthcare, banking and finance...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Leveraging Big Data by playing the collaborative card – By Christophe SUFFYS, Bittle   

Unquestionably, Big Data is positioning itself as a true technological revolution and should continue to occupy a central place in the coming years. According to MarketsandMarkets, the global Big Data market should reach a value of 66.79 billion dollars in 2021. Even so, Big Data remains a fairly young movement and requires the use of new […]

The article Leveraging Big Data by playing the collaborative card – By Christophe SUFFYS, Bittle appeared first on Docaufutur.


          $21.5 Million Big Data Project Aims to Improve Medical Care   
Three Major Cleveland Healthcare Institutions are Teaming Up to Share Data. Every patient and every visit to the doctor’s office adds, in some small way, to a doctor’s medical knowledge.…
          Ag Futures: Ahead of Big Data Day Some Exuberance in Grains   
          Comment on The Big Data Continuum: From Data Scientists to Empowered Business People by Goodbye Don Draper, Hello Big Data: An EMA Report on Modern Analytics   
[…] and be understood by line-of-business professionals, not just data scientists. (See more on this subject from RTInsight’s expert Connie […]
          Internship - Artificial Intelligence, Big Data & Rapid Prototyping - Société Générale - Montréal, QC   
SGCIB has positioned itself as a leader in interest rate and currency derivatives on the Canadian market. Société Générale Corporate & Investment Banking (SGCIB... $18 - $21 an hour
From Société Générale - Fri, 30 Jun 2017 00:06:44 GMT - View all Montréal, QC jobs
          Be patient!   

StockPulse analyzes communication on financial topics in publicly accessible social media and news sources. Big Data Analytics for Financial Markets

The post Be patient! appeared first on Startup Magazine.


          18 Paying Markets for Tech Articles   
The whole world is looking for techies, which means if your area of expertise is web development, website design, and/or all those things with confusing initials, you can write about it and make money!

Here is a list of publications that are hungry for your knowledge, and quite willing to offer you a decent amount of money for your articles, blog posts, and tutorials.

For more paying markets go HERE.
_______________________

A List Apart explores the design, development, and meaning of web content, with a special focus on web standards and best practices. Length: 1,500 - 2,000 words. Payment: $200 per article. Read their submission guidelines.

SitePoint publishes articles about HTML, CSS, and Sass. Payment: $150 for articles and $200 for tutorials. A tutorial is generally any in-depth article that has either a demo or code download link or that is very code-heavy in general, even if it doesn’t have an actual demo. Payment: $300 or more for articles and tutorials that are lengthier. Read their submission guidelines.

Word Candy provides content on a variety of topics relating to WordPress, online marketing and entrepreneurship. Payment: 6 cents/word. Read their submission guidelines.

The Layout features how-to articles on WordPress geared to business. They are looking for articles from experts in the field, whether you're a designer, developer, or just a knowledgeable writer. Length: 700 - 1,200 words. Payment: Up to $150. Read their submission guidelines.

Tutorials Point publishes all kinds of tech-related tutorials. They are specifically looking for people having sufficient domain knowledge in the following areas: Information Technology, Software Quality management, Java technologies, Mainframe technologies, Web development technologies, Project Management, Accounting and Finance, Telecommunication, Big Data, Microsoft Technologies, Business Intelligence, SAP Modules, Open Sources, Soft Skills, Academic Subjects from Engineering and Management Syllabus. Payment: $250 - $500. Read their submission guidelines.

WPHUB - "all things WordPress." WPHUB focuses on the WordPress development community, specifically toward theme developers, plugin authors and customization specialists. They are looking for writers with some development background. Payment: $100 - $200 per article. Read their  submission guidelines.

PhotoshopTutorials.ws publishes articles and tutorials on Photoshop. Payment: $25 - $50 for articles, $50 for quick tips, and $150 - $300 for full tutorials. Read their submission guidelines.

Vector Diary publishes tutorials about Illustrator. "If you have anything interesting and new to share about Illustrator, you are welcome to write for Vectordiary. It can be a technique you use for your projects or it can be a step-by-step guide to drawing an illustration. Anything readers are keen to know can be submitted." Payment: $150. Read their submission guidelines.

Linode includes a community of authors who contribute to Linode Guides and Tutorials. "We are always looking for guides on popular, trending topics, and updates to existing guides." Payment: $250. Read their submission guidelines.

The Write Stuff publishes articles about database development and management. Payment: $200 in cash and $200 in Compose database credits. Read their submission guidelines.

Indeni publishes articles about IT operations. "If you’re really good with firewalls, load balancers, routers, switches or severs, we’d like to work with you." Payment: $200. Read their submission guidelines.

The Graphic Design School Blog is looking for writers skilled enough with software to write a beginner tutorial in either Photoshop, Illustrator, InDesign, or open source design or utility software for designers. Payment: $100 - $200. Read their submission guidelines.

AppStorm caters to software users. Payment: Around $60, but the rate varies depending upon the type of article you choose to write. Read their Writer’s Guide before you submit your pitch.

Make Tech Easier is a tech tutorial site that teaches people the easier way to handle complicated tech. "We cover tutorials for various operating systems such as Windows, Mac and Linux, Mobile OS (iOS and Android), popular web app like Browsers (Firefox/Chrome), WordPress, and gadgets reviews. We are always looking for more writers to help us turn this site into something bigger and better." Payment: Amount not specified. Read their submission guidelines.

WorldStart is looking for tips for their e-mail newsletter, WorldStart’s Computer Tips. This is published daily to 300,000 readers and focuses on tips and tricks the average computer user can utilize. "We are also seeking feature articles for our website covering any and all aspects of computing." Payment: $15 - $35. Read their submission guidelines.

Labmice is a site for serious techies. They are looking for field notes, best practices, lessons learned, white papers, written material, guidelines, how-to's, technical explanations, etc. on almost any IT topic, including Windows 2000 Administration, Computer Security, Technical Project Management, etc. "Obviously we want 'real world' documents, and not things that are easily found in any textbook. You must be the original author/creator of the document and must declare that all of the content of the submitted work is original, unless referenced with permission from the original author." Length: 1,000 to 1,500 words. Payment: Negotiated. Read their submission guidelines.

Tutorial Board is looking for graphics tutorials by writers who are skilled with Adobe Photoshop, Adobe After Effects, Autodesk Maya or any other industry-standard CG software. Payment: Up to $150 per tutorial. Read their submission guidelines.

Smashing Magazine publishes articles about Web development and design. "We aim for exciting, creative articles that also cover recent developments within the industry. Writing does take time, but substance is more important than length." Payment: Not Specified. Read their submission guidelines.


          Senior Cloud Engineer/Big Data Architect / Data Science - Corporate Technology - New York, NY   
Perform data analysis with business, understanding business process and structuring both relational and distributed data sets....
From Corporate Technology - Wed, 14 Jun 2017 16:40:06 GMT - View all New York, NY jobs
          Belgian start-ups rake in cash   
22:30 In the first half of 2017, Belgian start-ups managed to attract 172 million euros. Collibra, the "big data" specialist, tops the ranking with a capital raise of 44...
          How AI And Machine Learning Are Helping Drive The GE Digital Transformation   
This is how GE has accomplished a digital transformation by leveraging AI and machine learning fueled by the power of Big Data.
          Does the World Need Cloudera?   
Cloudera’s recent IPO was probably one of the most anticipated public offerings of the season. The big data company, founded in 2008, was the first company to build a business around the professional deployment and support of Hadoop (MapR was created in 2009 and Hortonworks was founded in 2011).
          How Women Are Shaping The Big Data Revolution   
Increasingly, women executives are being called upon to take the lead in shaping the critical business functions that are most necessary to ensuring business value from Big Data and analytics investments.
          Executives Report Measurable Results From Big Data, But Challenges Remain   
After a half decade of investment, and periods of trial and error, a near majority of business executives now report successful results from their Big Data investments.
          Senior Software Engineer for Big Data & Data Analitycs SEAD-0617-VM - Smartercom Italia srl - Milano Nord, Lombardia   
Smartercom Italia, a company specializing in telecoms (TLC) and ICT consulting, is expanding its project teams at its clients' sites in Vimercate (Monza Brianza),
From Indeed - Thu, 29 Jun 2017 11:32:09 GMT - View all jobs in Milano Nord, Lombardia
          Software Engineer Big Data & Data Analitycs - Smartercom Italia srl - Milano, Lombardia   
Smartercom Italia, a company specializing in telecoms (TLC) and ICT consulting, is expanding its project teams at its clients' sites in Vimercate (Monza Brianza),
From IProgrammatori.it - Thu, 29 Jun 2017 12:55:53 GMT - View all jobs in Milano, Lombardia
          Splunk Notes (Paul Magnusson)   
Except for the crappy name, this stock lends itself to a cool and compelling story. It is Big Data, Artificial Intelligence, the Internet of Things and Cyber Security rolled into one. It produces software for searching, monitoring and analyzing machine-generated data, providing insights and options to management that circumvent filtering by IT.
          [Kafka-users] Big Data Interview Questions (Chaturvedi Chola)   
a very good book on big data interview preparation https://notionpress.com/read/big-data-interview-faqs chaturvedi -- Chaturvedi Chola
          Wal-Mart Prods Partners, Vendors to Leave AWS for Azure   

One provider of big data management services said it opted to host applications on Microsoft Azure instead of AWS, expressly to win business from a tech firm with a Wal-Mart account.


          Associate Technical Consultant-Big Data & Analytics Job - Newtown Square, PA   
As market leader in enterprise application software, SAP helps companies of all sizes and industries innovate through simplification. From the back office to the boardroom, warehouse to storefront, on premise to cloud, desktop to mobile device, SAP empowers people and organizations to work together more efficiently and use business insight
          Software Development Manager: Big Data and Data Warehouse - Amazon Dev Center India - Hyderabad - Hyderabad, Telangana   
The successful candidate will be responsible for leading a team to define and build a scalable software platform that will enable zero touch operation of a...
From Amazon.com - Tue, 27 Jun 2017 08:56:56 GMT - View all Hyderabad, Telangana jobs
          Ensure your big data initiatives are on the right path   
Access our latest expert guide The Path to Payoff on Big Data Analytics to help you get the big data ball rolling. This three-part guide provides real-life examples of successful big data analytics efforts and project management advice. Published by: TechTarget
          Senior Cloud Engineer/Big Data Architect / Data Science - Corporate Technology - New York, NY   
Perform data analysis with business, understanding business process and structuring both relational and distributed data sets....
From Corporate Technology - Wed, 14 Jun 2017 16:40:06 GMT - View all New York, NY jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Provide direction for hardware, OS, and associated components for the platform whilst operating in a regulated pharmaceutical environment. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Supervisor Region Operations   
WA-Vancouver, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Install and maintain all relevant OS components and utilities. Install and maintain MongoDB components and utilities within the platform. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          IT associate - systems analyst, business analyst, project manager   
Required skills and knowledge: level VI or VII professional education (computer science or a comparable field), familiarity with energy management topics, knowledge of databases and big data technologies, knowledge of IoT technology, skills in business analytics and business process optimization, and project management skills.
          Innovation round on digital transformation processes   
The topics of digitalization, Industry 4.0, the Internet of Things, big data and business analytics have gained considerable momentum in recent years. There is a spirit of optimism. The path to a new world? According to a survey by the digital industry association Bitkom, digitalization ranks among the most important...
          Big Data key to boxship future   

Companies operating in the container shipping sector are working hard to improve the way they use big data, realising it will play a key role in the future of the business. That’s what Peter Sand, Chief Shipping Analyst at BIMCO, told MarineTraffic in the lead-up to the Global




          COL - Big Data Specialist - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) BIG DATA SPECIALIST....
From Telefónica - Fri, 30 Jun 2017 13:16:48 GMT - View all jobs in Bogotá, Cundinamarca
          COL - Big Data Specialist Sales Executive - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Business Development / Big Data Product Professional....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          COL - Business Development / Big Data Product Professional - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Business Development / Big Data Product Professional....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          Technology Lead| Big Data-Data Store| GreenPlum   

          1st IEEE International Conference on Big Data 2013   
Big Data is one of the most talked about topics of today across industry, government and research. It is becoming the center of Investments, Innovations and Improvizations (3I's), and it is no exaggeration to say that Big Data is Transforming the World. Considering its potential, the IEEE Computer Society is conducting the IEEE International Conference on Big […]
          One More Step Closer “Towards an Industry Standard for Benchmarking Big Data Workloads”   
Following the successful workshop “Towards an Industry Standard for Benchmarking Big Data Workloads” (WBDB 2012) held in May 2012 in San Jose [2],  the Second Workshop on Benchmarking Big Data Workloads (WBDB2012.in) [1] will be held in Pune, India from 17 to 18 December at the Hinjewadi Campus of Persistent Systems Ltd, colocated with the 18th International […]
          Internship - Artificial Intelligence, Big Data & Rapid Prototyping - Société Générale - Montréal, QC   
SGCIB has positioned itself as a leader in interest rate and currency derivatives on the Canadian market. Société Générale Corporate & Investment Banking (SGCIB... $18 - $21 an hour
From Société Générale - Fri, 30 Jun 2017 00:06:44 GMT - View all Montréal, QC jobs
          Smart Asset Management for Electric Utilities: Big Data and Future. (arXiv:1706.09711v1 [cs.OH])   

Authors: Swasti R. Khuntia, Jose L. Rueda, Mart A.M.M. van der Meijden

This paper discusses needs and ways to improve predictive maintenance in the future while helping electric utilities make smarter decisions about when and where maintenance should be performed. Utilities have been collecting data in large amounts, but it is hardly utilized because of its sheer volume and the uncertainty associated with it. Condition monitoring of assets collects large amounts of data during daily operations. The question arises: how to extract information from this large chunk of data? The concept of 'rich data and poor information' is being challenged by big data analytics. Along with technological advancements like the Internet of Things, big data analytics will play an important role for electric utilities. The aim is to make current asset management smarter than it is today, and this work describes some pathways.


          Data Scientist - Data Insights & Analytics - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 17 Jun 2017 08:59:04 GMT - View all jobs: São Paulo, SP
          Data Scientist - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all jobs: São Paulo, SP
          Data Scientist - RA - CSI - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:08 GMT - View all jobs: São Paulo, SP
          Data Engineer - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:07 GMT - View all jobs: São Paulo, SP
          Data Engineer - Growth - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:06 GMT - View all jobs: São Paulo, SP
          Data Scientist - Mkt Place Routing - 99 TAXIS - São Paulo, SP   
Has worked with big data technologies like Storm, Samza, Hadoop, Presto, Airflow; 99's mission is to make transportation cheaper, faster and more efficient...
From 99 TAXIS - Sat, 10 Jun 2017 05:51:05 GMT - View all jobs: São Paulo, SP
          A Comprehensive Platform for Big Data Governance, Data Management, and Analytics   
IDC has estimated that since 2011, digitally generated data has been doubling every two years, reaching into the zettabytes. Discover 5 categories of big data that need to be analyzed and governed. Published by: SAS
          Big data, a useful tool for creativity   

Mastery of visual creativity is not limited to graphic techniques; it must also engage with marketing, advertising and the various digital tools that help underpin them, one of which is Big Data.



          Singapore Shipping Association adds new faces to Council   
Singapore Shipping Association adds new faces to Council

JUNE 30, 2017 — The Singapore Shipping Association (SSA) says that new faces on its Council are bringing on board expertise to complement and strengthen the core group.

At its annual general meeting yesterday, seven Ordinary Members nominated for the 2017/2019 Council were returned. The seven members are:

A. P. Moller Singapore Pte Ltd
Mr. Rene Piil Pedersen, Managing Director

Enesel Pte Ltd
Mr. Esben Poulsson, Chairman

Hong Lam Marine Pte Ltd
Ms. Caroline Yang, Executive Director, Legal, Finance & Safety

IMC Shipping Co Pte Ltd
Mr. Lim Sim Keat, Managing Director, Transport Logistics

Iseaco Investment Pte Ltd
Ms. Katie Men, Managing Director

NYK Group South Asia Pte Ltd
Mr. Shunichiro Mizukami, Chairman & Managing Director

Pacific International Lines (Pte) Ltd
Ms. Lisa Teo, Executive Director, Corporate Development

A further five Councillors were subsequently co-opted into the SSA:


Allen & Gledhill LLP
Mrs. Gina Lee-Wan, Partner

HSH Nordbank AG, Singapore Branch
Mr. Lee Keng Mun, Head of Shipping (Asia)

DNV GL Singapore Pte. Ltd.
Mr. Steen Brodsgaard Lund, Vice President, Regional Manager, South East Asia and Pacific Maritime

M3 Marine Group Pte Ltd
Capt. Mike Meade, Chief Executive Officer

Kontiki Shipping Pte Ltd
Mr. Ng Ee Ping, General Manager

SSA President Esben Poulsson said: "We have aimed to co-opt Councillors with specific expertise across a range of disciplines, and my new team brings on board over two centuries' worth of accumulated maritime expertise in their specialized fields. I have the utmost confidence that they are fully committed and charged up to take the SSA in the run-up to the next decade.

"Drawing inspiration from our new motto 'Navigating the Future', we will actively engage our 460 strong membership to identify and promote our industry's views on issues of importance to maritime stakeholders at home and abroad. We will endeavour to further strengthen the SSA as a positive force in the maritime community and continue to stand by its long-term interest in enhancing Singapore's status as an international maritime center.

"For the next two years, the SSA Council believes SSA must continue its ongoing efforts to grow the Singapore War Risks Mutual (SWRM) – this will further develop the role of marine insurance in Singapore, and to further entrench Singapore as an important maritime insurance hub. Likewise, the development of future leaders and the creation of a Singapore Clause is a key element of the SWRM initiative.

"The Council believes that the impact of Digitalization and other potentially disruptive technologies within the shipping industry will be central to the work of the SSA in the coming years. Through our continuous dialogue with the MPA, we will seek to cooperate with relevant stakeholders, to reap the opportunities from Big Data and Digital innovation, in order that our industry can stay agile and gain improved productivity, in the next wave of digitalization and other technologies.

"Furthermore, through our expanded international outreach, we aim to enhance our relationships with the ASA, FASA, and the ICS to ensure that SSA will be able to speak with authority on the many challenging regulatory issues confronting our industry as the respected voice of our membership in the international arena.

"In all these endeavors, we will continue to work closely and collaboratively with other Maritime Singapore stakeholders, especially the MPA, the SMF, the SMEF and SMOU & SOS, to coordinate many worthwhile initiatives – always with the overriding aim of benefitting our members as we navigate the future together.''


          Global Industry Alliance aims to move shipping to low carbon future   
GIA was officially inaugurated at a launch ceremony held at IMO headquarters in London

JUNE 29, 2017 — Leading shipowners and operators, classification societies, engine and technology builders and suppliers, big data providers, and oil companies have signed up to a new Global Industry Alliance (GIA) to support transitioning shipping and its related industries towards a low carbon future.

Thirteen companies have signed up to launch the GIA, under the auspices of the GloMEEP Project, a Global Environment Facility (GEF)-United Nations Development Program (UNDP)-International Maritime Organization (IMO) project aimed at supporting developing countries in the implementation of energy efficiency measures for shipping.

According to an IMO briefing, the GIA partners will collectively identify and develop innovative solutions to address common barriers to the uptake and implementation of energy efficiency technologies and operational measures. Focusing on a number of priority areas including energy efficiency technologies and operational best practices, alternative fuels, and digitalization, activities likely to be undertaken or promoted will include: research and development; showcasing of advances in technology development and positive initiatives by the maritime sector; industry fora to encourage a global industry dialogue; and the implementation of capacity building and information exchange activities.

The GIA was officially inaugurated June 29 at a launch ceremony held at IMO headquarters, where the first meeting of the IMO Intersessional Working Group on Reduction of GHG emissions from ships was being held.

In his GIA launch speech, IMO Secretary-General Kitack Lim said the new alliance would help shipping to make its contribution towards greenhouse gas reduction and the mitigation of climate change, a key target for the United Nations under its Sustainable Development Goals (SDGs).

"What we are witnessing today is the formal start of a tried and tested partnership concept which has the potential to boost still further our efforts to kick-start the change that society demands and create a firm, tangible basis to transform the shipping sector for the better," Mr Lim said. "Under this new public-private partnership initiative, these 'industry champions', which come from different sectors of the industry and may have different business strategies within the same sector, are coming together to contribute to tackling the challenges of decarbonizing the shipping sector."

Following the announcement by the GloMEEP Project of its intention to establish the GIA, thirteen companies have agreed to become the founding members of the GIA, although it is expected that more companies may join the GIA even after the launch. The thirteen members that have formally committed to joining the alliance are:

ABB Engineering (Shanghai) Ltd.;
DNV GL SE;
Lloyd's Register EMEA;
MarineTraffic;
MSC Mediterranean Shipping Company S.A.;
Ricardo UK Ltd;
Royal Caribbean Cruises Ltd.;
Shell International Trading and Shipping Company Limited;
Silverstream Technologies;
Stena AB;
Total Marine Fuels Pte Ltd;
Wärtsilä Corporation;
Winterthur Gas & Diesel Ltd.

These companies are supporting the overall goals of the GIA by providing their expertise and know-how in the area of maritime fuel efficiency, as well as contributing financially towards the GIA Fund from which GIA activities will be funded.

Following the official GIA launch, the first GIA Task Force meeting was convened to discuss work modalities and kick-off the GIA work.


          Big Data and the Stalker Economy   
Crack is being served in Silicon Valley. An enthusiastic crowd of geeks and suits -- all of them "data scientists" -- just spent three days at the O'Reilly Strata conference (#strataconf) in Santa Clara. All over the event's menu is the crack cocaine of our day: big data. A couple decades ago, [...]
          Big Data Lead   

          Big (Data) Problems   

Like the buzz that started with SO-LO-MO, companies know that big data is important. And like SO-LO-MO, they don’t really know where to get started or how to get the biggest bang out of this big-data buck. Companies all over Silicon Valley are growing tired of the “big data” buzz. As they should: the term was first coined in 1998 (according to the WSJ), so it’s actually about 15 ...



          AT&T exec: Big data enables customer-first predictive strategies for omnichannel   
NEW YORK – An AT&T executive at Forrester’s CXNYC 2016 said that the company is heavily focused on mining big data as it thinks about targeting multi-device users and ways to predict and be proactive with customers.
          Thinking, Fast and Slow: Big Data, Sandy and the Media   
A couple of days back, after watching the addictive stream of fake and real pictures emerging from New York, and following the fascinating real-time efforts by the Atlantic and the "Is Twitter Wrong?" tumblog at telling them apart, I was inspired to post this little note on my Facebook profile: [...]
          The End of Pax Papyra and the Fall of Big Paper   
Perhaps it is because I know too much about paper for my own good, thanks to a decade of involvement in various kinds of publishing and a four-year stint at Xerox, but the more I explore the emerging world of Big Data, the more I keep going back to the [...]
          Can You Use Big Data? The Litmus Test   
Over the past few days at Strata, I've been trying to make sense of the whole Big Data scene, and I keep returning to the question: under what conditions can a business make use of this stuff? I am generally a skeptic, but not the low-imagination kind of skeptic who [...]
          Weather data centre in Bologna: Minister Galletti expresses satisfaction   
Following the Council of Ministers' approval of the bill on the European Big Data Weather Centre in Bologna, the process gets under way
          Vice President - Big Data - CitiRisk Retail - -   
Vice President - Big Data - CitiRisk Retail (requisition 17004425). Responsibilities: requirements analysis; development in Hadoop (Big Data Analytics / MongoDB); build enrichment...
          New Approaches to Ethno-Linguistic Maps   


This post originates from the HWRG-blog. Please note that there are multiple authors of HWRG and that the most updated version of this blogpost can be found here: http://ift.tt/2ubG9BB.
___________________________________________
New Approaches to Ethno-Linguistic Maps

I’m excited to give a guest blog post here at humans who read grammars on new methods in language geography.  I’m a geographer by trade, and I am currently a PhD student at the University of Maryland.  I also work for an environmental nonprofit - Conservation International - doing data science on agriculture and environmental change in East Africa.  Before ending up where I am now, I lived for some time in West Africa and the Philippines.  During my time in both of those linguistically-rich areas, I became quite interested in language geographies and linguistics more generally.  Spurred on by curiosity and my disappointment in available resources, I’ve done some side projects mapping languages and language groups, which I’ll talk about here.

Problems with Current Language Maps

A map of tonal languages from WALS.  Fascinating at a global scale, but unsatisfying if you zoom in to smaller regions.
One major issue with most modern maps of languages is that they often consist of just a single point for each language - this is the approach that WALS and glottolog take.  This works pretty well for global-scale analyses, but simple points are quite uninformative for region-scale studies of languages.  Points also have a hard time spatially describing languages that have disjoint distributions, like English, or languages that overlap spatially.  See here for a more in-depth discussion of these issues from Humans Who Read Grammars.

One reason that most language geographers go for the one-point-per-language approach is that placing a single point is easy, while mapping languages across regions and areas is very difficult.  An expert must decide where exactly one language ends and another begins.  The problem with relying on experts, however, is that no expert has uniform experience across an entire region, and thus will have to rely on other accounts of which language is prevalent where.  This is how, for example, the Murdock Map of African ethno-linguistic groups was created.  As a continental-scale map, it is rich and fascinating.  However, look more closely at a specific region, and the map seems to have problems - how did Murdock know exactly the shape of each little wiggle identifying the boundary between two groups?  What about areas where two different groups overlap?  Other issues arise when trying to distinguish distinct groups where the on-the-ground reality is a dialect continuum, something that subjectively drawing polygons does not readily account for.

These maps can have real import when they form the foundation of other analyses. Researchers have examined whether ethnic diversity in developing countries, and in Africa in particular, can hamper economic development and lead to conflict. Scientists disagree, although many analyses use the Murdock map. See some of this research here, here and here. Another study, recently published in Science, looked at Internet penetration in areas where politically excluded ethnic groups live. They found that groups without political power were often marginalized in terms of internet service provision. However, their data for West Africa, which came from the Ethnic Power Relations database, was quite rough: all of southern Mali was one ethnic group labeled “blacks” while the north was labeled as “Tuaregs” or “Arabs”, while there was no data at all for Burkina Faso.  While their findings were important and they did the best that they could with available datasets, a less informed analysis from the same data could end up looking like linguistics done horribly wrong.  We need better ethno-linguistic maps simply to do good social science and address these critical questions.

New Methods and Datasets

I believe that, thanks to greater computational efficiency offered by modern computers and new datasets available from social media, it is increasingly possible to develop better maps of language distributions using geotagged text data rather than an expert’s opinion.  In this blog, I’ll cover two projects I’ve done to map languages - one using data from Twitter in the Philippines, and another using computationally-intensive algorithms to classify toponyms in West Africa.

I should note that for all its hype, big data can be pretty useless without real-world experience.  The Philippines and West Africa are two parts of the world where I have spent a good amount of time and have some on-the-ground familiarity with the languages.  Thus, I was able to use my local knowledge to inform how I conducted the analyses, as well as to evaluate their issues and shortcomings.

Case Study 1: Social Media From The Philippines

Many fascinating language maps from Twitter have been created at global scales - see here, and here.  However, to explore the distribution of understudied languages that don’t show up in maps of global languages, one must use more bespoke methods.  This is especially true of Austronesian languages like those found in the Philippines, which don’t have a lot of phonemic variability, and therefore aren’t easily classified using the methods that Google Translate uses.  These methods, which rely on slices of the sample text, often confuse Austronesian languages like Tagalog and Bahasa - just look at the maps I mentioned above.  Thus, I had to use a word-list method, and created word lists from corpora offered by SEAlang and by scraping local-language Wikipedia articles.  The resulting maps show exactly where minority languages are used in comparison with English and Tagalog in the Philippines, and likely underestimate the prevalence of minority languages because the corpora used (Wikipedia and the Bible) are quite different from the Twitter data that was classified.  (A minimal sketch of the word-list method appears at the end of this case study.)

Languages of Tweets in the Philippines.
The resulting map shows about 125,000 tweets in English, Tagalog, Taglish (using Tagalog and English in the same tweet), and the local languages Cebuano, Ilocano, Hiligaynon, Kapampangan, Bikol, and Waray.  This map offers more nuance than traditional language maps of the Philippines.  For example, most maps would show Ilocano over the entire northern part of Luzon, but this map shows that the use of Ilocano is much more robust on the northwest coast than in the rest of the north.  This analysis also allowed me to test a hypothesis that I frequently heard locals assert while in the Philippines - that English is more common in the south, because southerners would rather use English than Tagalog, which is seen as a northern language.  I found this to be the case, and I was only able to confirm it because I had such a large sample size.  Without newer datasets like those offered by social media, this hypothesis would be untestable.

To see a more in-depth description of this analysis, you can see my original blog post here.
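To make the word-list method concrete, here is a minimal sketch in Python. The word lists below are tiny placeholders of my own invention - the real lists came from the SEAlang corpora and Wikipedia scrapes - and the two-hit rule for Taglish is an illustrative assumption, not the exact threshold used for the map.

import re

# Toy word lists -- placeholders, not the actual corpus-derived lists.
WORD_LISTS = {
    "tagalog": {"ang", "ng", "mga", "hindi", "ako"},
    "cebuano": {"ang", "sa", "ug", "wala", "ko"},
    "english": {"the", "and", "not", "you", "this"},
}

def classify_tweet(text):
    # Count word-list hits per language; tweets with strong Tagalog AND
    # English signals are labeled "taglish", mirroring the map's category.
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    hits = {lang: len(tokens & words) for lang, words in WORD_LISTS.items()}
    if hits["tagalog"] >= 2 and hits["english"] >= 2:
        return "taglish"
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "unclassified"

print(classify_tweet("hindi ako sure, the food and the place"))  # -> taglish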

Case Study 2: West African Toponyms

Another project I did used toponyms, or place names, from West Africa.  Toponym databases like geonames.org have relatively high spatial resolution, with a name for every populated place in an area.  And while a place name is not as long as a tweet or other text sample, toponyms do encode ethno-linguistic information.  It would be easy for someone familiar with Europe to distinguish whether a toponym is associated with the French or German linguistic group - a French name might begin with “Les” and end with “-elle”, while a German name could begin with “Der” and end with “-berg”.  Similar differences exist between toponyms from different ethnic groups all over the world, and are quite evident to locals.  What if you could train an algorithm to detect these differences, and then had it classify every single toponym throughout a region?  That is what I tried to do in this analysis.

I used toponyms for six countries in French West Africa. I decided to focus on French West Africa for several reasons. For one, I have worked there, and have some familiarity with the ethnic groups of the region and their distributions, and it is an area I am very curious about. For another thing, this is a relatively poorly documented part of the world as far as ethno-linguistic groups go, and it is an area with significant region-scale ethnic diversity. Finally, the countries I selected were colonized by one group, meaning that all of the toponyms were transliterated the same way and could be compared even across national borders. In all, I used 35,785 toponyms.

First, I got a list of every possible set of three letters (called a 3-gram) from the toponyms.  Then, I tested for spatial autocorrelation in the locations that contained each 3-gram using a Moran’s I test, and selected only those 3-grams that showed significant clustering.  (A sketch of this screening step appears after the illustration below.)

To give an illustration of why this step was necessary, here are two examples of the spatial distribution of 3-grams. One 3-gram - “ama” - occurs roughly evenly throughout the regions in this study. The other - “kro” - is very common in toponyms in south-east Côte d'Ivoire, and virtually nonexistent in other areas. Thus, “kro” has significant spatial autocorrelation whereas “ama” does not.

Here are all of the toponyms that contain the 3-gram “kro”:


And here are all of the toponyms that contain the 3-gram “ama”:

Thus, the 3-gram “ama” doesn’t tell us much about which ethnic group a toponym belongs to, because it is found evenly distributed throughout West Africa - it is just noise. The 3-gram “kro”, on the other hand, carries information about which ethnic group a toponym belongs to, because it is clearly clustered in Southeast Côte d'Ivoire.
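
Here is a sketch of the screening step just described, assuming toponyms arrive as (name, lon, lat) tuples. The use of libpysal/esda and a k-nearest-neighbour weights matrix is my assumption about tooling - the post does not say which implementation of Moran's I was used.

import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran

def significant_3grams(toponyms, alpha=0.05, min_count=10):
    # Return the 3-grams whose presence across toponyms is spatially clustered.
    names = [name.lower() for name, lon, lat in toponyms]
    coords = np.array([(lon, lat) for name, lon, lat in toponyms])
    w = KNN.from_array(coords, k=8)  # neighbourhood structure for the test
    grams = {n[i:i + 3] for n in names for i in range(len(n) - 2)}
    keep = []
    for g in sorted(grams):
        y = np.array([1.0 if g in n else 0.0 for n in names])
        if y.sum() < min_count:  # too rare to test reliably
            continue
        mi = Moran(y, w, permutations=199)
        if mi.p_sim < alpha:  # e.g. "kro" passes, "ama" does not
            keep.append(g)
    return keep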

I then calculated the lexical distance between all of the toponyms based on the number of shared 3-grams that had significant spatial autocorrelation.  To add a spatial component, I also linked any two toponyms that were less than 25 kilometers apart. Thus, I had a graph where every toponym was a vertex, and undirected edges connected toponyms that had spatial or lexical affinity.  Finally, I used a fast greedy modularity-optimizing algorithm to detect communities, or clusters, in this graph; a sketch of this step follows.
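
A minimal sketch of that graph construction and clustering, using networkx (my choice of library - the post does not name one); greedy_modularity_communities is the Clauset-Newman-Moore fast greedy modularity-optimizing algorithm. The min_shared threshold is an illustrative assumption, and the O(n^2) pairwise loop would need a spatial index to scale to all 35,785 toponyms.

import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance in kilometers.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    h = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(h))

def toponym_communities(toponyms, sig_grams, min_shared=2, max_km=25.0):
    # Vertices are toponyms; edges join pairs with lexical affinity
    # (shared significant 3-grams) or spatial affinity (< 25 km apart).
    feats = [{g for g in sig_grams if g in name.lower()}
             for name, lon, lat in toponyms]
    G = nx.Graph()
    G.add_nodes_from(range(len(toponyms)))
    for i in range(len(toponyms)):
        for j in range(i + 1, len(toponyms)):
            lexical = len(feats[i] & feats[j]) >= min_shared
            spatial = haversine_km(toponyms[i][1], toponyms[i][2],
                                   toponyms[j][1], toponyms[j][2]) < max_km
            if lexical or spatial:
                G.add_edge(i, j)
    return list(greedy_modularity_communities(G))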

Results
The algorithm found seven distinct communities, which definitely correspond to ethnic groups and ethnic macro-groups in West Africa.


The red cluster includes Wolof, Serer, and Fulfulde place names, which makes sense, as all of these groups are Senegambian languages. This group of languages is the primary group in Senegal and Mauritania, which my classification picked up on. It also caught the large Fulfulde presence in central Guinea, throughout an area known as the Fouta-Djallon. This cluster also has a significant presence throughout the Sahel, stretching into Burkina Faso and dotted throughout the rest of West Africa, much like the migrant Fulfulde people.

The green cluster captures most of the area where Mandé languages are spoken, including most of Mali, where the Bambara are found, as well as Eastern Guinea and Northern Côte d'Ivoire, where Malinké is found. Interestingly, most of the toponyms in Western Mali fell into the Senegambian/Fulfulde cluster, and were not in the Mandé cluster, even though there are Mandé groups like the Soninké and Khassonké in Western Mali. Southern Guinea is densely green, representing the presence of Mandé groups there, like the Kuranko. Surprisingly, much of central and southern Côte d'Ivoire also fell into the green cluster, even though there are a couple of different groups there which are not in any way related to the Mandé groups that were most represented in the green cluster. This is also true of areas in Western Burkina Faso and Eastern Mali, where there are many languages unrelated to the broader Mandé group, such as Dogon, Bobo, Minianka, and Senufo/Syempire. However, I know that Dyula, a Mandé language closely related to Bambara, is spoken as a trade language in both of these areas (Côte d'Ivoire and Western Burkina Faso). It could be that Dyula has had a long enough presence in these areas to leave an imprint on the toponyms there.

The purple group pretty clearly captured two different disjoint groups that are both in the broader Mandé group - the Susu, in far Western Guinea, and the Dan, in Western Côte d'Ivoire. These groups are normally classified as being on quite separate branches of the Mandé language family, with the Susu being Northern Mandé and Dan being Eastern Mandé. However, the fact that the algorithm put them in the same group, even though they were too far apart to have edges/connections based on spatial affinity, shows that Dan and Susu toponyms have several 3-grams in common.

The yellow cluster seems to have caught two sub-groups within the broader green/Mandé cluster. Many of the yellow toponyms in central Mali are in what you could call the Bambara homeland, between Bamako and Segou. However, a second cluster stands out quite distinctly in southern Guinea. It’s unclear to me what group this could represent and why it would have toponymic features distinct enough from its neighbors that the algorithm put it in a different cluster. Some maps say that a group called the Konyanka lives here and speaks a language closely related to Malinké.

The turquoise cluster quite clearly captures the Mossi people and their toponyms, as well as the Gurunsi, a related group (both Mossi and Gurunsi are classified as Gur languages).

The black cluster in southern Burkina Faso captured a group that most national ethno-linguistic maps call the Lobi, although this part of West Africa is known for its significant ethno-linguistic heterogeneity. Another group of villages in Eastern Burkina Faso also fell into the black cluster, although I could not find any significant ethnic group documented there.

Finally, the blue cluster captured both the Baoulé/Akan languages as well as the Senufo. It captured the Senufo especially in Côte d'Ivoire and somewhat in Burkina Faso, but not much in Mali, where I know the Senufo have a significant presence. This could represent a Bambarization of previously Senufo toponyms due to the fact that the government of Mali is predominantly Bambara, or it could pre-date the Malian state, as this area was part of Samori Toure’s Wassoulou Empire, in which the Malinké language was strongly enforced. The classification of the Senufo languages has always been controversial, but this toponymic analysis suggests that they are more related to Kwa toponyms to the south rather than to Gur toponyms to the northeast.

Caveats

Some caveats apply to this work and its interpretation. For one, it only shows toponymic affinities. Those affinities usually correspond to ethnic distributions, but not always. There is a lot of migration in West Africa today, and place names don’t usually change as quickly as the distributions of people. Thus, toponyms can sometimes encode historic ethnic distributions: for example, many toponyms in the United States come from Native American languages, and there are many toponym suffixes in England that reflect a historic Nordic presence. Thus, this and similar maps are most informative when interpreted in combination with on-the-ground information and knowledge.

Another issue with classifying toponyms in West Africa in particular is that West African toponyms are transcribed using the Latin alphabet, which definitely does not capture all of the sounds that exist in West African languages. Different extensions of the Latin alphabet, as well as an indigenous alphabet, are often used to transcribe these languages; however, these idiosyncratic methods of writing are not used in the geonames dataset. Thus, the Fulfulde bilabial implosive (/ɓ/ in IPA) is written the same way as a pulmonic bilabial plosive - as a “b” - so this distinction is lost in our dataset, even though it adds a lot of information about what ethnic group a given toponym belongs to. However, some other sounds and sound combinations, which are very indicative of specific languages, are captured using a Latin alphabet - for example prenasalized consonants (/mb/) common in Senegambian languages, labial velars (/gb/ and /kp/) common in coastal languages, or the lack of a ‘v’ in Mandé languages. Issues also arise with how different colonizers transcribe sounds differently, for example ‘ny’ and ‘kwa’ in English would be ‘gn’ and ‘coua’ in French. However, this didn’t apply in this analysis, which only used Francophone countries, and I believe it could be dealt with in a larger analysis; a rough sketch follows.
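
As a rough illustration of how transcription differences might be harmonized in a larger, multi-colonizer analysis, the sketch below rewrites a few French spelling conventions into English-style ones before 3-gram extraction. The mapping is illustrative only - a real correspondence table would need to be built and validated by someone who knows the orthographies well.

# Illustrative French-to-English transcription harmonization.
FRENCH_TO_ENGLISH = {
    "gn": "ny",     # palatal nasal
    "coua": "kwa",  # labial-velar plus vowel sequence
    "ou": "u",      # the /u/ vowel; "coua" must be handled before "ou"
}

def harmonize(toponym):
    name = toponym.lower()
    for fr, en in FRENCH_TO_ENGLISH.items():  # dicts preserve insertion order
        name = name.replace(fr, en)
    return name

print(harmonize("Gnacoua"))  # -> "nyakwa"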

Conclusion


This is an exciting time to be at the intersection of geography and linguistics!  New datasets and computational methods are giving researchers the ability to ask newer and better questions about who belongs to what group, and where.  I hope new developments in this research can yield new linguistic results about phylogeny, migration, and the spread of linguistic phenomena.  Outside of the field of linguistics, better language maps could have broad applications, from improving disaster response planning to helping answer critical questions about the origins of ethnic conflict.


          (USA-FL-Tampa) Solution Account Manager, Partner Sales   
Do you want to help eliminate barriers between ideas and business outcomes? We want you to bring your unique experiences and creative ideas to the table. CA Technologies provides software and solutions that help our customers to develop, manage, and secure complex IT environments to increase productivity and enhance competitiveness in their businesses. It’s our aim to encourage global collaboration and results-oriented innovation, while supporting and developing our talented people and our communities. CA Technologies will empower you to drive authentic success, for both the business and yourself, in the application economy.

Solution Account Manager Security, Partner Sales

Ready to do the best work of your life? As a Solution Account Manager at CA, you will be responsible for selling our Security Solutions to help companies solve their IT pain points. This role contributes to CA’s success and customers’ success by building key relationships within assigned accounts and leveraging CA solutions to compete securely in the digital era.

About CA: CA Technologies is a Fortune 1000 company with a startup mentality – and we’re searching for incredible, bright talent to dominate in the marketplace. Sure, CA has been a leading software company for nearly four decades, with a global customer base that includes the majority of the Fortune 2000 – but what excites us today is the opportunity to redefine the future of our industry in the age of the cloud, mobile, social and big data. We have a daring vision and a powerful, expanding solution set that helps the world’s most successful companies realize their boldest objectives. For more information, visit www.CA.com/innovation.

About the Role: In this role, you’ll help support CA Technologies’ charter to transform the IT industry by:
* Managing and growing large-company accounts
* Building strong relationships and identifying customer needs
* Delivering value to customers
* Making persuasive product presentations
* Closing through logical, incremental steps

How You’ll Stand Out: The perfect candidate for this role will have a demonstrated record of success in positions of increasing responsibility over the course of their career. She/he will have an outstanding track record and reputation for achieving sales goals by selling enterprise software solutions. They will be sharp, professionally aggressive, and love to hunt. An ideal background will include:
* Typically 5 years of experience in enterprise software sales
* Experience selling security software such as Identity Management, Web Access Management, GRC, Privileged Access, API Security, or similar solutions
* Demonstrated ability to close net new business within existing accounts
* Experience in crafting large, complex software deals
* Expertise in the areas of new business development, relationship building, pipeline management, and closing enterprise software deals

If you want to fulfill your potential, be acknowledged for your achievements, and be given autonomy to make decisions for your business and customers; if you want to work with a company that respects you as an individual - recognizing both your needs at work and your responsibilities outside of it - then CA Technologies is where you belong. At CA Technologies your passion and expertise can directly impact the business, and you’ll help offer our customers practical approaches to delivering new, innovative services and value through IT.
We offer competitive salary, company-sponsored premium Medical/Prescription & Dental Plans, company-paid Holidays, Vacation, Anniversary Service and Sick Days, 401(k) Plan, Education/Training Reimbursement, Charitable Gift Program, and an Adoption Assistance Program. Learn more about CA Technologies and this opportunity now at http://ca.com/careers

We and all of our subsidiaries are equal opportunity employers. As such, it is our corporate policy to fill positions with qualified candidates regardless of the candidate’s race, color, sex, age, religion, ancestry, national origin, citizenship status, marital status, sexual orientation, gender identity, genetic information, disability, pregnancy, military status, veteran status or any other protected group status.

Note to Recruiters and Placement Agencies: We do not accept unsolicited agency resumes. Please do not forward unsolicited agency resumes to our website or to any of our employees. We will not pay fees to any third party agency or firm and will not be responsible for any agency fees associated with unsolicited resumes. Unsolicited resumes received will be considered our property and will be processed accordingly.

If you require an accommodation with the online application process, please contact Talent Acquisition at 1-800-454-3788. EOE/Min/Women/Veterans/Disabled
          (USA-FL-Tampa) Data Architect   
Kforce has a client seeking a Data Architect in Tampa, Florida (FL).

Essential Functions:
* Working directly with the Lead Architect, CS Analyst, and the Data Researcher; ultimately responsible for data governance and analytics initiatives in support of the project
* Design and develop visualization and data fusion applications that combine deep data and analytics skills to support CENTCOM MISO needs (data correlation, TAA, MOE/MOP, etc.)
* Work directly with CENTCOM clients and partners
* Experienced with IBM Big Data solutions and unstructured data ETL
* Knowledge includes ETL methodology, API development, R and Python
* Knowledge of one or more of the following products (EIA/ANB, Watson APIs, DataStage) is desired

Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.

Compensation Type: Hours
          (USA-FL-Tampa) Lead Architect   
Kforce has a client that is seeking a Lead Architect in Tampa, FL. Summary: The ideal candidate is ultimately responsible for the delivery of an integrated analytic solution to support CENTCOM J39. The candidate directs technical members of the project team and regularly interacts with CENTCOM clients and partners in the IO space. In conjunction with the IO SME, this person will keep the project on schedule and be prepared to address any schedule, performance or cost issues. The Chief Architect leads the design and development efforts and provides technical leadership, applying new technologies and developing and enhancing solutions that drive their clients' missions and transform their IT and business environments. * Experience with HDFS, DB2/TM1, Company Big Data tools, and Java/C * Extensive experience with very large data repositories (terabyte scale or larger) * Deep understanding of data warehouse approaches, industry standards and industry best practices Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.*Compensation Type:*Hours
          HR Professionals Say These Big Data Jobs are in High Demand   

Brands in every industry are utilizing big data to improve efficiency and deliver higher quality solutions. However, big data logistics can’t be entirely automated. They rely heavily on the input and expertise of data scientists. Jeanne Harris, one of the senior executives for Accenture Institute for High Performance, has stated that data analytics is an […]

The post HR Professionals Say These Big Data Jobs are in High Demand appeared first on SmartData Collective.


          How Human Centered Design and Big Data Are Merging in 2017   

In the late 1990s, most web services used cookie cutter approaches to solve user needs. These templated solutions prevailed for nearly two decades before the industry began focusing on human centered design. Human centered design focuses on the needs of the final users. Developers create custom solutions that are tailored to each user. Despite the obvious […]

The post How Human Centered Design and Big Data Are Merging in 2017 appeared first on SmartData Collective.


          (USA-FL-Tampa) Senior Digital Product Marketing Manager for Cloud Platform – LOB and Customer Solutions - Redwood Shores, CA   
Development and implementation of business plans, marketing strategy, and forecasts for a product/service or vertical market. As a product analyst you will participate in every stage of the product life cycle to ensure the product meets the needs of users. Drive the implementation of programs in support of the marketing strategy, business plans, and forecasts for assigned product lines. Maintains current status of customer specifications for existing and future products. Identify, evaluate, and recommend marketing opportunities in support of product line objectives. Drive product functionality delivering high quality product documentation. Lead products through scheduled release assisting others to manage commitments and resources. Measure and report progress and review deliverables. Leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Demonstrated product/project management experience. Ability to work with Executives. Strong interpersonal skills. Excellent written and verbal communication. Creative, energetic, and enthusiastic. Team player. Experience using or implementing Oracle, SAP, or Peoplesoft applications. Basic HTML and web technology skills a plus. 8 years technology/industry experience and BA/BS/MBA degree. *Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans status or any other characteristic protected by law.* Are you passionate about Oracle's strategy to lead in the cloud-first world? Do you love to compete? Do you thrive in dynamic environments where the market landscape is constantly changing and you need to anticipate competitors' moves? Are you excited by the opportunity to make Oracle THE #1 cloud platform in the market? The Global Oracle Cloud GTM product marketing team is looking for a *Senior Digital Product Marketing Manager* to help shape our GTM efforts around the *overall Cloud Platform*. This includes cross-pillar messaging, development of personas, buyer-driven solutions for LOB and SMBs, online campaigns, and go-to-market efforts relative to sales enablement. This role will be mostly focused on leading our *customer-oriented marketing for the Cloud Platform*, including customer case studies, accelerating references, enabling the field for reference-selling, customer momentum releases, and developing customer content (i.e. videos, blogs, solution briefs, e-books) across Cloud PaaS and IaaS. We are looking for someone who works well collaboratively with key stakeholders across corporate marketing, PR, product development and field sales teams across multiple product groups including Analytics, Database, Middleware and Infrastructure. In addition, we'll leverage this role for driving business LOB content for cross-over solutions in SaaS and PaaS. As a manager in product marketing, you will be responsible for the following: Leading the development of Cloud GTM marketing specifications for the Cloud Platform, especially focusing on customer case studies and customer content. Managing marketing activities within the context of the overall corporate plan to meet company objectives. 
Lead buyer-driven campaigns for multiple solutions across pillars, including SMB, IT and LOB personas. Monitor the development of business plans and business results. Manage lifecycles and product positioning in the marketplace. Identify customer needs, oversee market research, and monitor the market trends/competitive landscape, especially focused on AWS and MSFT Azure. Implement and monitor field sales plays and integrated campaigns. Act as a technical advisor for specific product(s), including public speaking. Initiate and foster relationships with development, sales, and other groups to develop new products or enhance functionality of existing product(s) or product line(s). Ensure successful product releases based on corporate priorities. Direct and ensure the implementation of operational policies through subordinate managers. Interact internally and externally with executive management, involving negotiation of difficult matters to influence policy. Build developer-focused content to help customers understand Oracle vs. competitors' offerings. Define and create compete content BOM, including use cases, detailed personas, key themes, customer lifecycle messaging, differentiated value propositions and ROI, field communications and sales enablement readiness, in partnership with other product marketing managers, product management and field sales teams. Partner with members of other teams within the marketing org, including advertising, digital, communication, integrated marketing and product marketing teams, to put compete marketing plans into action. Key Qualifications: Experienced in leading go-to-market for enterprise software and emerging styles of GTM for Cloud-based packaged business solutions. Understanding of Cloud PaaS and/or IaaS technologies related to one or multiple areas such as Analytics, Database, Application Development, Security, and Integration Middleware. Understanding of key customer trends & use cases such as dev/test, microservices, SaaS integration, big data analytics, API Management, and PaaS for SaaS. Experience working with customer references, case studies, and also direct customer engagement types of roles. Experience with or knowledge of Oracle's Cloud offerings; hands-on experience with our technology and experience with, or the desire to learn and go deep on, our competitors' products and services. The ability to work across teams (engineering, marketing, sales and services) to ensure we create and promote the right competitive messages through the right vehicles. Demonstrated bias for action and driving results. At least 3 to 5 years' experience in roles such as marketing or sales, evangelism, program management, etc. within the hi-tech industry. The candidate should also have: B.A./B.S. in Computer Science or Engineering (preferred), Marketing or related field. Preferably, an M.B.A. (or equivalent in related work experience). Local candidate to the California Bay Area preferred (Redwood Shores HQ). Recruiter Contact: Erin Smith erin.d.smith@oracle.com **Job:** **Marketing* **Organization:** **Oracle* **Title:** *Senior Digital Product Marketing Manager for Cloud Platform – LOB and Customer Solutions - Redwood Shores, CA* **Location:** *United States* **Requisition ID:** *17000RDD*
          (USA-FL-Tampa Bay) Senior Software Developer (Java/Hadoop/Spark)   
As a Senior Software Developer, you will be hands-on and will be expected to bring new and fresh ideas to the company and its solutions. You will be directly responsible for supporting product innovation and contributing to recommendations to employ leading-edge technology on behalf of corporate needs. The position requires a high degree of expertise in current web and database technologies and competency to work on the most complex projects. This position works on broad, highly-visible, strategic software development projects in an extremely complex and evolving technical and business environment. Back-end development experience with Java, Hadoop, Spark and other big data applications is required, though we will train on Hadoop and Spark as needed. Define and build applications and systems to support near-term competitive products and services as well as long-term business needs. Key responsibilities include hands-on software development for prototypes and proofs of concept. Job Responsibilities + Analyze, design, develop, test, implement and document software applications and business systems of the highest complexity. + Prototype software components and incorporate reusable assets into the application design. + In collaboration with Architect Leaders, and as a result of a high degree of business engagement, develop plans for building application solutions and environments that address the company's business and technological strategies and are conducive to quick, inexpensive systems delivery. + Define, document, and implement conceptual designs consisting of data strategy, business processes, application interfaces, and technology solutions. + Define and implement a structure and design framework including applications design; end-user environment; hardware, software and network environment; and methodologies and standards. + Train, coach, and share technical and business knowledge with less experienced staff + Develop software code which is maintainable, easy to use, and satisfies requirements for highly-complex or business-critical applications. + Perform configuration management tasks + Research, assess, and facilitate enhancements and resolution of incidents Role Requirements + Bachelor's degree in computer science or equivalent work experience. + 6 years IT experience with 3 years’ experience in solutions development. + 4 years’ experience developing Java, Hadoop, Spark and other big data applications, and 3 years as a Sr. Software Developer. + In-depth knowledge of software development technology, principles, methods, tools, and practices and industry standards and trends. + Experience in application architecture and design techniques and familiarity with data modeling and relational database techniques. + Extensive, practical experience with building and maintaining large-scale, complex application systems in a team environment. + Comprehensive understanding of the online media business environment, including product offerings, strategic direction and practices, and procedures in order to properly plan and execute appropriate designs. + Industry experience in Market Research, Advertising or Media
          (USA-FL-Tampa) Account Manager, Partner Sales   
Do you want to help eliminate barriers between ideas and business outcomes? We want you to bring your unique experiences and creative ideas to the table. CA Technologies provides software and solutions that help our customers to develop, manage, and secure complex IT environments to increase productivity and enhance competitiveness in their businesses. It’s our aim to encourage global collaboration and results-oriented innovation, while supporting and developing our talented people and our communities. CA Technologies will empower you to drive authentic success, for both the business and yourself, in the application economy. Ready to do the best work of your life? As an Account Manager, Partner Sales at CA, you will be responsible for eliminating the barriers between ideas and outcomes for our partners through the introduction and use of CA Technologies solutions to solve their clients' business challenges. This will be accomplished through recruiting, strategic planning and enablement of partners. This position requires a thorough understanding of the partner's business and the industry in which they compete, and of their corresponding IT initiatives and offerings; it involves identifying where and how CA can help grow their business, developing compelling business value proposals and sales plays for our solutions, and engaging with the field, partner and digital sales teams to execute; all of which results in growth of PNCV. This position is further responsible for developing and maintaining trusted relationships with the C-suite, senior level decision makers, and other key stakeholders within the Partner account(s). * About the Role:* In this role, you’ll help support CA Technologies’ charter to transform the IT industry by: * Building strong relationships with partners & customers, identifying customer needs and selling through partners * Delivering value to customers * Making persuasive product presentations * About CA:* CA Technologies is a Fortune 1000 company with a startup mentality – and we’re searching for incredible, bright talent to dominate in the marketplace. Sure, CA has been a leading software company for nearly four decades, with a global customer base that includes the majority of the Fortune 2000 - but what excites us today is the opportunity to redefine the future of our industry in the age of the cloud, mobile, social and big data. We have a daring vision and a powerful, expanding solution set that helps the world’s most successful companies realize their boldest objectives. For more information, visit www.CA.com/innovation * How You’ll Stand Out:* The perfect candidate for this role will have a demonstrated record of success in positions of increasing responsibility over the course of their career. She/he will have an outstanding track record and reputation for achieving sales goals by selling enterprise management software solutions. They will be sharp, professionally aggressive, and love to hunt. An ideal background will include: * Typically 5 years of experience in enterprise software sales * Experience working with partners to uncover and close new opportunities * Demonstrated ability to close net new logos * Expertise working with partners to drive new business * More About Working at CA:* CA has earned scores of global Workplace Excellence awards in the last few years – and there’s a reason for that. 
Here you’ll have the opportunity to explore flexible work arrangements, partner with our impressive customer set, and enjoy a competitive compensation package – all while pushing the boundaries of what’s “possible” by collaborating with a diverse team of global innovators. In short? CA’s fun, diverse, fast-paced culture has put us on the map as one of the best employers in IT. For more information, visit www.CA.com/careers. * A Great CA Employee:* * Takes smart risks * Exhibits courage * Is a “driver” * Navigates ambiguity * Embraces diversity * Communicates openly * Doesn’t take themselves too seriously * Likes to laugh * Loves a good challenge * Is resilient when faced with setbacks * Teams well with others * Insists on quality results * Is forward-thinking * Adapts easily to change * Solves problems creatively If you want to fulfill your potential, be acknowledged for your achievements, and be given autonomy to make decisions for your business and customers; if you want to work with a company that respects you as an individual - recognizing both your needs at work and your responsibilities outside of it - then CA Technologies is where you belong. At CA Technologies your passion and expertise can directly impact the business and you’ll help offer our customers practical approaches to delivering new, innovative services and value through IT. We offer competitive salary, company-sponsored premium Medical/Prescription & Dental Plans, company-paid Holidays, Vacation, Anniversary Service and Sick Days, 401(k) Plan, Education/Training Reimbursement, Charitable Gift Program, Adoption Assistance Program. Learn more about CA Technologies and this opportunity now at http://ca.com/careers We and all of our subsidiaries are equal opportunity employers. As such, it is our corporate policy to fill positions with qualified candidates regardless of the candidate’s race, color, sex, age, religion, ancestry, national origin, citizenship status, marital status, sexual orientation, gender identity, genetic information, disability, pregnancy, military status, veteran status or any other protected group status. / *Note to Recruiters and Placement Agencies*: We do not accept unsolicited agency resumes. Please do not forward unsolicited agency resumes to our website or to any of our employees. We will not pay fees to any third party agency or firm and will not be responsible for any agency fees associated with unsolicited resumes. Unsolicited resumes received will be considered our property and will be processed accordingly./ / If you require an accommodation with the online application process, please contact Talent Acquisition at 1-800-454-3788./ EOE/Min/Women/Veterans/Disabled
          (USA-FL-Tampa) Big Data Architect   
Job Description The Big Data Architect will have progressive experience in data management, data migration, data warehousing, business intelligence and data modeling, with deep knowledge of enterprise data architecture concepts, and experience designing and building mission-critical/high-volume transaction and highly scalable systems across globally distributed data centers. The Big Data Architect will have hands-on software development experience with Big Data technologies such as Hadoop, Splunk, Cassandra, BigQuery, MapReduce, Impala, Redshift, Kinesis or PostgreSQL, plus experience in enterprise data modeling, data normalization, key/value pair modeling, use case modeling, business rule design and storing metadata/data dictionaries within a formal modeling tool. Education + Baccalaureate Degree in Computer Science, Computer Information Systems, Business Administration, Mathematics, or a related field. Relevant experience will be considered in lieu of degree. + Relevant certifications and licenses will be considered. Qualifications + 15+ years of big data and relevant engineering experience while meeting expertise requirements + Must have good communication skills; shows tact, effective listening skills and follow-through, and anticipates questions while being prepared with answers or follow-up. + The ability to work independently with little to no supervision, researching new technologies or comparing technologies to meet the customer's needs while providing an unbiased opinion. + Candidate must be a team player and be able to follow processes and procedures. + Self-disciplined, self-starter, professional who can successfully bring projects to closure with minimum direction, guidance and oversight. #dpost #cjpost #ARMA
          Big Data Engineer with Data Analytics   
Kubrick Group - London - to 2 years' commercial experience after graduating? Want to embark on a career in one of the most lucrative sectors, Data Analytics and Big... Data Engineering? If so, then we have just the solution for you! The secret is out and the mad rush is on to leverage data analytics tools...
          Big Data Developer - Scala, Hadoop, HortonWorks, Spark, HDFS   
iKas International Ltd - London - Big Data Developer - Scala, Hadoop, HortonWorks, Spark, HDFS A leading financial organisation is currently seeking to hire an experienced... Big Data Developer to join a team of elite technologists, and help with the design and development of Big Data solutions that will form the...
          Lead Data Engineer - Big Data - Banking - London   
Twenty Recruitment Group - London - A tier-1 bank based in central London are building a Centre of Excellence which is responsible for adopting innovative, cutting edge data... technologies, tool sets and methodologies in order to leverage the business's data assets to solve real business problems. You'll be working...
          Integration Engineer for Ericsson Expert Analytics - Ericsson - Piscataway, NJ   
This is an exciting opportunity to join a Centre of Excellence team working on Big Data, Business Intelligence and Customer Experience Management projects.
From Ericsson - Sat, 27 May 2017 07:15:00 GMT - View all Piscataway, NJ jobs
          E2E Solution Architect for Ericsson Expert Analytics - Ericsson - Piscataway, NJ   
This is an exciting opportunity to join a Centre of Excellence team working on Big Data, Business Intelligence and Customer Experience Management projects.
From Ericsson - Fri, 26 May 2017 07:13:44 GMT - View all Piscataway, NJ jobs
          Big Data Junior Developer - FastConnect - Purchase, NY   
Experience in Hadoop (Cloudera, HortonWorks and/or MapR) – Architecture, Deployment and Development. Whatever your degree/discipline, our Graduate Program will...
From FastConnect - Mon, 24 Apr 2017 18:13:03 GMT - View all Purchase, NY jobs
          Python, Sql Developer (Full time & permanent position) - Nityo Infotech - Jersey City, NJ   
GUI - Struts, Spring, JavaScript; Big Data - Hadoop, MapR. Python, SQL Developer.... $130,000 a year
From Indeed - Mon, 12 Jun 2017 22:02:17 GMT - View all Jersey City, NJ jobs
          Product Manager - XDuce - Texas   
Experience working on popular Hadoop distribution platforms like Cloudera, HortonWorks, and / or MapR. Big Data Admin....
From XDuce - Tue, 02 May 2017 03:35:17 GMT - View all Texas jobs
          Data Architect, Big Data - BCD Travel Corporate - United States   
MapR experience preferred. The solutions will need to consider the full enterprise data hub platform including, but not limited to MapR, Data Virtualization and...
From BCD Travel - Wed, 28 Jun 2017 02:38:07 GMT - View all United States jobs
          Big Data Architect - Financial Services - IBM - United States   
4 years of experience in the Hadoop platform (such as Cloudera, Hortonworks, MapR, and/or IBM BigInsights). Is data your jam?...
From IBM - Fri, 16 Jun 2017 06:00:51 GMT - View all United States jobs
          Principal Consultant - Neudesic LLC - United States   
Microsoft, Tableau, AWS and Hadoop (Hortonworks, Cloudera, MapR, etc.), certifications a plus. Our Business Intelligence and Big Data capability is comprised of...
From Neudesic LLC - Fri, 09 Jun 2017 21:24:50 GMT - View all United States jobs
          Manager - Incedo - United States   
Big Data Management and Predictive Analytics Practice, Statistical Modeling and Predictive Analytics, Hadoop, Cloudera, Horton Works, MapR....
From Incedo - Wed, 24 May 2017 20:30:55 GMT - View all United States jobs
          Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy   
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil (published 2016; average rating 3.90). Added by Hester to the to-read shelf on 2017/05/22; not yet rated or reviewed.


          Digital financial inclusion: what works and what’s next    


Over one billion women in the world do not have access to financial services. Having access to a transaction account is a first step toward financial freedom and toward women taking charge of their lives.

Women are an underutilized resource in development. Not having access prevents women from having an equal footing in society. Financial inclusion can unleash enormous potential for economic development.

The World Bank’s World Development Report on gender estimated income losses due to women being excluded from the world of work at 10%-37% of GDP across all regions. Research by the World Bank Group, the IMF, the OECD, and private sector studies show that billions can be added to global GDP by advancing women's equality. 

Digital technologies are extending access to finance to millions of people, including women. This is incredibly exciting and the world is placing high stakes on digital technologies as a principal way to bring the 2 billion unbanked adults into the formal and regulated financial system.

It’s much easier today to save, make payments, access credit, and obtain insurance, all of which helps people manage day-to-day expenses, make long-term plans and handle unexpected emergencies.

In 2016, the G20 issued a report led by the World Bank Group and the People’s Bank of China – the High Level Principles for Digital Financial Inclusion - which provided eight recommendations for countries to encourage financial inclusion through digital technologies. A few weeks ago, the G20 finance ministers endorsed a follow-up report which profiles what countries have done in line with these recommendations.

Countries are taking different approaches. For example, Brazil, Mexico and Turkey have been digitizing government-to-person payments (salaries, welfare, etc.), while India has invested heavily in building critical digital infrastructure, including creating national digital IDs.

To mitigate the risks of expanded digital access, Ghana, for example, is piloting new ways for insurance providers to reach customers through mobile phones.

Likewise, countries are adjusting their legal and regulatory frameworks while ensuring a fair and just level playing field. They are also increasingly employing tiered-regulation and customer-due-diligence approaches to promote financial inclusion while conforming to anti-money-laundering and combating-the-financing-of-terrorism requirements. China, Mexico and Tanzania are examples.

The speed and breadth of innovation in digital financial technologies is fascinating. But these new possibilities create new expectations.

Essentially, countries must be nimble, adapting quickly to this rapid pace of change so they can leverage technology for the greater good.

Governments must lead the way. They must create space and give the industry a green light to innovate.  They also need to coordinate across relevant agencies, including social welfare agencies that interact with people not in the formal financial system.

At the same time, regulators need to acquire better digital tools and look for new ways to foster fintech innovation, from pilot programs to greater collaboration with the industry.  Also, big data requires more sophisticated and automated systems that can provide real-time monitoring and analysis of financial activities.
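To make that last point concrete, here is a minimal, hypothetical sketch of automated real-time monitoring over a transaction stream; the rolling window, threshold and record format are all invented for illustration and do not come from any regulator's actual tooling.

```python
from collections import defaultdict, deque

# Hypothetical parameters: flag any account whose transfers within a
# one-hour rolling window sum past 10,000 (units are illustrative).
WINDOW_SECONDS = 3600
FLAG_THRESHOLD = 10_000.0

recent = defaultdict(deque)  # account -> deque of (timestamp, amount)

def observe(account, amount, ts):
    """Record one transaction; return True if the rolling sum looks suspicious."""
    window = recent[account]
    window.append((ts, amount))
    # Evict events that have aged out of the rolling window.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    return sum(a for _, a in window) > FLAG_THRESHOLD

# Three rapid transfers that together cross the threshold.
for i, amt in enumerate([4000.0, 3500.0, 3000.0]):
    if observe("acct-42", amt, ts=float(i * 60)):
        print(f"flag acct-42 after transfer #{i + 1}")
```

A production system would persist state, score patterns rather than apply a single threshold, and feed analysts a queue of alerts, but the windowed-aggregation core is the same.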

Governments also must promote interoperable, open technology systems that work with one another, so that everyone can participate in digital financial services regardless of which phone or service they use.

Most importantly, governments must prioritize the creation of national digital IDs, while addressing legitimate privacy and data security concerns.

The World Bank Group already works with national authorities to help them develop regulatory frameworks that address the opportunities and risks created by fintech, in line with global standard-setting guidelines on payments and financial inclusion.

Developing countries profiled in this report are among the 25 countries where over 70% of the unbanked people live. They have been identified as priority countries in the World Bank Group’s Universal Financial Access 2020 initiative, whose goal is to provide financial access to adults currently left out of the formal financial system. If these countries make great strides toward digital financial inclusion, the world will be so much closer to reaching this goal.


          Web-Scale Converged Infrastructure Designed to Simplify IT   
Download this resource to key into the specs of one hyper-converged infrastructure, and discover why its implementation is ideal for supporting multiple, virtual workloads such as VDI, private cloud, database, OLTP and data warehouse as well as virtualized big data deployments. Published by: Dell EMC and Nutanix
          Jorge Coronado: ILLOWP – Illo, illo, illo otro WP desactualizado…   
Jorge Websec found that the WordPressa experience left him wanting more, so he has now built an automated system to audit the Internet at massive scale, locating websites and classifying them by content management system. It is a network of automated bots which, upon detecting a WordPress install, look up its version, plugins, users and themes, building a Big Data set that can serve many purposes. That generated Big Data is what Jorge Coronado will talk about at WordCamp Sevilla 2016. How many outdated WordPress installs are there in Spain? How many are vulnerable? Which plugins are most used, and in which regions? All this and more in his talk.
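As a rough illustration of the fingerprinting such bots perform, here is a minimal sketch that reads a page's WordPress generator meta tag. The URL is a placeholder, many sites remove or alter the tag, and scanning sites you do not own may be unlawful, so treat this as an assumption-laden example rather than a working auditor.

```python
import re
import urllib.request

def detect_wordpress_version(url):
    """Fetch a page and parse the standard WordPress generator meta tag, if present."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Typical tag: <meta name="generator" content="WordPress 4.7.2" />
    match = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
    return match.group(1) if match else None

print(detect_wordpress_version("https://example.com/") or "no WordPress generator tag found")
```

The system described in the talk would add plugin, theme and user enumeration on top of this, plus the crawling and storage layer that turns the results into an analyzable data set.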

Presentation Slides »


          (Senior) Consultant Big Data Management - English - Deloitte - Noord-Holland   
You completed a relevant Master of Arts / Master of Science (like Computer Science, Information Science or Technical Business Administration)....
From Deloitte - Tue, 06 Jun 2017 08:15:43 GMT - View all Noord-Holland jobs
          COL-ESPECIALISTA DATOS BIG DATA - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) ESPECIALISTA DATOS BIG DATA....
From Telefónica - Fri, 30 Jun 2017 13:16:48 GMT - View all Bogotá, Cundinamarca jobs
          COL - EJECUTIVO VENTA ESPECIALISTA BIG DATA - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Profesional Desarrollo de Negocio / Producto Big Data....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all Bogotá, Cundinamarca jobs
          COL - Profesional Desarrollo de Negocio / Producto Big Data - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Profesional Desarrollo de Negocio / Producto Big Data....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all Bogotá, Cundinamarca jobs
          Access to your own genome online?   
"I wanted every person to have access to their own genetic information, so that at any moment they could review it on their online profile. Combining biotechnology and big data, the automated analysis of data from millions of people, will create a new quality in the fight against disease," said Anne Wojcicki in an interview with the British edition of Wired.
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Accountable for Solution Design and Template Quality covering:. Experience designing and building complex, high performance platforms to support Big Data...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          5 Simple Tips To Help You Survive The 4th Industrial Revolution   
We are at the beginning of a new industrial revolution. This 4th industrial revolution will be defined by intelligent machines and algorithms that are fed by big data. In this post I look at some simple ways everyone can prepare for, and better survive, this transformation.
          Sales and Marketing   
TX-Houston, Who are we: We are a 25-year-old, highly respected global company specializing in state-of-the-art fuels and lubricants testing services, which is our core business. We are also developing new software-based products in the areas of Energy Efficiency, Emissions and Pollution Controls. Our advanced big data analysis algorithms create money-saving solutions for all types of ships. Preference:
          Data Engineer with Scala/Spark and Java - Comtech LLC - San Jose, CA   
Job Description - Primary Skills: Big Data experience; 8+ years' experience in Java, Python and Scala with Spark and Machine Learning (3+ years); data mining; data analysis
From Comtech LLC - Fri, 23 Jun 2017 03:10:08 GMT - View all San Jose, CA jobs
          Big Data Architect   

          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK HR Service Centre at 1-877-694-7547 (US Toll Free) or +1...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
In the absence of such written authorization being obtained any actions undertaken by the employment business/agency shall be deemed to have been performed...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Big Data: Brittany's Neovia joins forces with Gironde-based Ceva Santé Animale   
By creating the start-up AppliFarm, the animal nutrition and health giants Neovia and Ceva Santé Animale plan to develop applications with Evolution, BCEL Ouest and GDS Bretagne Eilyps, Cogedis and Adisseo. The goal? To exploit the millions of data points set to be analyzed on each farm. Neovia (7,700 employees and 1.6 billion euros) is joining its expertise to...
          Digital Business Integration Manager - Big Data - Accenture - Canada   
Choose Accenture, and make delivering innovative work part of your extraordinary career. Join Accenture and help transform leading organizations and communities...
From Accenture - Tue, 13 Jun 2017 02:38:27 GMT - View all Canada jobs
          Software Engineer, Google Cloud Platform   
CA-San Diego, Main Duties and Responsibilities: - Play an integral role in building the company's cloud computing platform leveraging Google Cloud Platform (GCP). - Take ownership of high-profile GCP software engineering and Big Data initiatives at a publicly-traded, industry-leading company. - Develop and maintain server side code for cloud based applications. Skills and Requirements: - 7 - 10+ years of software eng
          Avast! Anti Virus Support for 64-bit Windows   
It comes as quite a relief that Avast! antivirus now fully supports the 64-bit Windows platform, in both the Avast! Home and Professional editions. This is a big stride for ALWIL Software, which had been closely watching the platform and following up on the whole question of bringing Avast! to it. The Windows XP 64-bit version supports up to 32 GB of RAM and 16 TB of virtual memory. Much antivirus software cannot support 64-bit Windows because of this massive address space. The 64-bit version of Windows can run applications at great speed when working with big data sets.
These applications can preload far more data into virtual memory, enabling quicker access through the processor's 64-bit extensions. This reduces the time spent moving data into virtual memory and writing it back to storage devices, so applications run much faster and with greater control.
Regular 32-bit antivirus applications for Windows do not run on 64-bit Windows, because they depend on 32-bit kernel drivers. The Avast! antivirus application changes this equation by shipping native 64-bit drivers while still delivering the same protection and safety achieved on 32-bit Windows. Both the 32-bit and 64-bit versions install in the same way.
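To make the 32-bit versus 64-bit address-space point concrete, here is a small, assumption-laden Python sketch: it checks the process word size and attempts an allocation no 32-bit process could address. The 6 GB figure is arbitrary, and the allocation can still fail with MemoryError if the machine lacks free memory.

```python
import sys

# A 64-bit Python reports sys.maxsize == 2**63 - 1; 32-bit builds report 2**31 - 1.
is_64bit = sys.maxsize > 2**32

if is_64bit:
    # 6 GB in one buffer: impossible to address in a 32-bit process,
    # routine in a 64-bit one (given enough free memory).
    buf = bytearray(6 * 1024**3)
    print(f"Allocated {len(buf) / 1024**3:.1f} GB in a 64-bit process")
else:
    print("32-bit process: usable address space is capped near 4 GB")
```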
          Customer Service Technician   
OR-White City, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          COL-ESPECIALISTA DATOS BIG DATA - Telefonica - Bogotá, Cundinamarca   
Machine learning (prediction and segmentation), web profiling, web mining, text mining, social network analysis (SNA) and dynamic optimization. ESPECIALISTA DATOS BIG DATA....
From Telefónica - Fri, 30 Jun 2017 13:16:48 GMT - View all Bogotá, Cundinamarca jobs
          Senior Systems Engineer/ Big Data - Dell - Remote   
Greenplum, MPP databases, Data Lake strategy, heterogeneous data management, BI/DW, visualization etc). 5+ years of experience with deep understanding in...
From Dell - Wed, 03 May 2017 23:31:25 GMT - View all Remote jobs
          Consultant, Service Delivery - Big Data - Dell - Remote   
Manage Greenplum DB or Postgres including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring....
From Dell - Tue, 02 May 2017 17:26:55 GMT - View all Remote jobs
          Senior Systems Engineer/ Big Data - Virtustream Inc. - United States   
Greenplum, MPP databases, Data Lake strategy, heterogeneous data management, BI/DW, visualization etc). 5+ years of experience with deep understanding in...
From Virtustream Inc. - Wed, 03 May 2017 20:56:37 GMT - View all United States jobs
          The Renaissance of ERP: Leveraging a Digital Core for Increased Innovation   
This white paper explores how you can boost innovation by upgrading core ERP systems and extending those capabilities in the cloud. Find out how you can improve your company's ability to manage key business processes with a digital ERP strategy and take advantage of evolving opportunities from big data, IoT, mobility and more. Published by: SAP
          Sr Software Engineer ( Big Data, NoSQL, distributed systems ) - Stride Search - Los Altos, CA   
Experience with text search platforms, machine learning platforms. Mastery over Linux system internals, ability to troubleshoot performance problems using tools...
From Stride Search - Tue, 04 Apr 2017 06:25:16 GMT - View all Los Altos, CA jobs
          Why leaving ‘Sense8’ without an ending could hurt the Netflix brand   


A few days ago Netflix's vice president of original programming, Cindy Holland, issued a statement regarding the cancellation of ‘Sense8’: "After 23 episodes, 16 cities and 13 countries, the story of the Sense8 cluster is coming to an end." And so the sentence was passed on one of the series that most fully celebrates diversity, equality and fellowship among people.

But no, this piece is not going down that road. The debate over the moral and/or social responsibility of content providers and producers is as relevant as the purely commercial question, if not more so. Still, even though the essence of ‘Sense8’ is one of remarkable hope and joy in today's landscape, I will focus this reflection on the more commercial side of the decision: on why it can damage not only the platform's image but also its growth.

A question of profit

Money, subscribers and, ultimately, profit are the most tangible measures of success Netflix can hold on to when deciding what to renew and what not to. That explains the recent cancellations of some of its most expensive productions, such as ‘The Get Down’, budgeted at 12 million per episode, although rumor has it that Baz Luhrmann got carried away and exceeded 200 million in total (16.6 per episode).

We can also include the cancellations of titles such as ‘Bloodline’ (which got a third and final wrap-up season) and ‘Marco Polo’, both budgeted at 9 million per episode, twice that of Netflix hits like ‘House of Cards’, ‘Orange is the New Black’ or probably ‘Stranger Things’, whose total budget is unknown but whose creators have been very open about it being ridiculously small, which pushed them to be more creative.

At those prices, the only ones left were ‘Sense8’, which had doubled its budget in the second season to 9 million per episode, and ‘The Crown’, the only one still standing for now. The success of ‘Stranger Things’ is in fact a good example to illustrate the remarks Reed Hastings made one day before the news of the ‘Sense8’ cancellation:

Our hit ratio is way too high right now. We've canceled very few shows. I'm constantly pushing the content team: we have to take more risks, try crazier things. Because we should have a higher cancellation rate overall; it's when you venture out like that that you land the big winners, the incredible hits, like ‘13 Reasons Why’. It surprised us. It's a great show, but we didn't see how much people would love it.

Having too many hits may sound arrogant, but what Hastings lays out reflects a logical view for a company that wants to make money: risks can turn out to be very profitable, and for what one ‘Sense8’ costs they can take chances on, and win with, many other titles. We know Netflix does not disclose audience figures, but the cancellations speak for themselves in this (and other) case(s).

Much has been written about the algorithms and big data that shape the company's decision-making, but Hastings himself confessed that the bulk of it comes down to the mix of subscriber growth and viewer numbers: "It's about how many people watch it, and those two factors are closely linked."
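Purely as an illustration of that arithmetic, here is a hypothetical sketch of a cost-per-viewer renewal heuristic; the function, the audience figures and the cutoff are invented, and only the per-episode budgets echo numbers quoted above. Netflix's real model is not public.

```python
# Hypothetical renewal heuristic: viewers plus retained subscribers
# per million dollars of per-episode budget. All audience numbers
# and the 1.0 cutoff are invented for illustration.
def renewal_score(cost_per_episode_musd, viewers_m, subscriber_growth_m):
    return (viewers_m + subscriber_growth_m) / cost_per_episode_musd

shows = {
    "Sense8":          renewal_score(9.0,  viewers_m=4.0,  subscriber_growth_m=0.2),
    "Stranger Things": renewal_score(4.5,  viewers_m=14.0, subscriber_growth_m=1.5),
    "The Get Down":    renewal_score(12.0, viewers_m=3.0,  subscriber_growth_m=0.1),
}

for name, score in sorted(shows.items(), key=lambda kv: -kv[1]):
    verdict = "renew" if score > 1.0 else "cancel"
    print(f"{name:16s} score={score:5.2f} -> {verdict}")
```

On these made-up numbers the expensive, modestly watched shows fall below the cutoff, which is exactly the logic the article attributes to the cancellations.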

So it is clear why ‘Sense8’ was canceled: it is not profitable. That is an easy conclusion, supported by Hastings' words as well as all the contract negotiation problems they already had before production of the second season. And if it is not profitable, the cancellation is understandable. But is leaving the series without an ending consistent with their model?

A damaging and unnecessary message


As viewers we are used to networks canceling our favorite shows, many of them without a proper ending to the story. The production calendar and the nature of scheduled programming make it hard (though avoidable with foresight) for networks to consider producing a wrap-up, something that, seen through the CFO's glasses, would also be a considerably useless investment if it only serves to please a group of fans and, at most, polish the brand's image a little.

However, things look very different when we think about Netflix and its model: a streaming platform that promotes binge-watching and whose production pattern is based on offering the customer at least one title that makes them keep their subscription for one more month. With that in mind, what sense does it make to cancel one of its Netflix Originals without a wrap-up? And this one is a true original, not one of those "originals" that come from buying international distribution rights.

This decision sends a very unappealing message to the service's subscribers, who may hesitate to start a series for fear it will have no ending, undoing the efforts of a marketing strategy built on turning each new release into the event of its premiere week. The issue becomes more pressing when we consider the amount of fiction now available to viewers; viewers who, moreover, have fully adapted to on-demand television and have total control over how they manage what they watch.

The most serious thing of all is that it is completely unnecessary. As I noted before, a conventional TV network has a model in which a one-off broadcast to close out a canceled product is not just pointless but unthinkable. At Netflix it is quite the opposite.

The way it has designed its platform, Netflix has total freedom. Freedom to produce 17-minute episodes and 63-minute ones. To plan seasons of three episodes or ten. To air a special episode or make a film. With a model like that, what sense does it make to damage your image this way? Worse still, what reason can they find to justify offering an unfinished original series in their catalog?

We have hardly any information about what lies behind this decision beyond speculating about the budget, but if I had to go on that alone I would say Netflix will be consistent and produce a special episode to close out ‘Sense8’, if only out of the pragmatism of not invalidating what has been invested and achieved so far. An S02E12 that would let this great series by the Wachowskis become that happy place you recommend when someone asks what to watch on Netflix.

Update 29/06/2017: Netflix has announced that it will, after all, produce a two-hour episode to close out the series.

Related reading | Is streaming killing the TV series as we know it?


          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Job Category - Enterprise Architecture:. Designs, documents and publishes target (2-3 yr) enterprise architectures (business, information, application, system...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Big data investments to top $76B worldwide by 2020   
Technologies are expected to capture, store, manage and analyze large and variable collections of information.
          Hipster Heaven: Competition and Innovation in Western Australia - OpEd, The West Australian   

HIPSTER HEAVEN – COMPETITION AND INNOVATION IN WESTERN AUSTRALIA

The West Australian, 15 June 2017

Hipsters play an underappreciated role in competition policy. The reason for this is simple. Hipsters don’t like the mainstream. They’re not happy unless their clothes are vintage, their bikes are fixies, and their flat whites are served in avocados. If it is mass-produced by a multinational, they won’t touch it.

Indeed, hipsters are the reason we have so many choices when it comes to which alleyway speakeasy to drink in. Across our inner cities, hipsters are the reason why mechanics and car dealers are giving way to cocktail bars and edgy restaurants. Hipsters’ desire for innovation, difference and choice boosts competition.

According to the New York Times, Perth is ‘Hipster Heaven’. So the forces of hipsters should be stronger than ever.

Yet even here, there are signs that competitive pressures aren’t as strong as they should be. On one metric, over half of Australia’s industries are overly concentrated markets. That includes plenty of industries that are important to Western Australia, among them telecommunications, credit unions, cinemas, liquor retailing, pharmacies, hardware, gyms, magazines, newsagents and international airlines.

For all the talk of start-ups and innovation, the rate of new business formation is slower now than it was in the first decade of the twenty-first century. Meanwhile, the pace of mergers continues unabated. The big firms are getting bigger, but the hungry start-ups aren’t snapping at their heels the way they should be.

Megacorps find it easier to work together – and not in a good way. A recent academic study found that big data allowed Perth’s petrol retailers to coordinate the weekly price cycle, driving up margins at the bowser. The researchers concluded that this ‘tacit collusion’ cost motorists around 10 cents a litre.


          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Install and maintain all relevant OS components and utilities. Install and maintain MongoDB components and utilities within the platform. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Xcalar Gets $21M For Big Data Analytics   
San Jose-based Xcalar, a developer of big data software for business analytics, has raised $21M in a Series A funding, the company said on Tuesday. The round was led by Khosla Ventures, and also included angels Andy Bechtolsheim and Diane Greene, along with Merus Capital.... (more)
          Never Too Late: If you missed the IPKat Last Week!   
Did the scorching weather last week keep you from keeping up with IP news? Not to worry, the 152nd edition of Never Too Late is here to update you on the IP latest!

The week started with Kat Mark reporting on the breaking news: German Constitutional Court stops implementing legislation for Unitary Patent Package. The signing-off of the Unitary Patent package has been halted due to a complaint that the highest German court considered not wholly without merit. More to follow.

Kat Mark also reported on the conference "Innovation and Competition in Life Sciences Law" (pt.1 and pt.II), organised by the Center for Intellectual Property and Competition Law (University of Zurich) and the Center for Life Sciences Law (University of Basel). With its mix of academics, practitioners and industry representatives, the conference provided valuable insight into this field.

Bouncing to the tennis courts, former InternKat Nick brought us an insight on Special K and beyond: tennis brands. Tennis player Kokkinakis, known as “Special K”, and the famous cereal brand may be headed for a Down Under trademark face-off. Who will win the tiebreak?

On to more breaking news, CJEU says that a site like The Pirate Bay makes acts of communication to the public. The Court stated that the operators of such a platform, playing an essential role in making the works available, are to be considered liable for copyright infringement. Kat Eleonora reports.

In a discourse over threats to competition, possible solutions and the applicability of the concept of “non-rivalrous goods” to data, Kat Neil discussed: The challenge of big data: we ignore it at our professional peril, underlining the great importance of this intangible asset.

Continuing a very techie week, Katfriend Mirko Brüß reports that a German court has ordered Google to stop linking to the Lumen Database. The Database, which collects and analyzes legal complaints and requests for removal of online materials, was, according to the Higher Regional Court of Munich, misused by Google to provide access to infringing content.

Event report: Trends in the creative digital economy. InternKat Hayleigh participated in this event, which explored “Trends in the Creative Digital Economy: Findings from the CREATe Research Programme” at the Digital Catapult Centre in London, combining research presentations and discussions with the launch of the Copyright and Innovation Network.

"Kat in a basket" Oil on canvas, 2008 ca. 
Following on Kat Hayleigh's report of the CREATe event, Kat Nicola tells A Tale of Stability - Business Models in the Creative Industries. She goes into detail on the results of her research presented at the conference, showing that in the new copyright digital era, business models are not changing.

A long quest for lost paintings, sprinkled with adversities, court rulings and family ties. Guest Kat Mathilde tells us how a Paris Tribunal supports an heir's claim to a looted painting. After the owner lost works of art in one of the many seizures during WWII, one painting reappeared as part of a Parisian exhibition. The ensuing race to keep the painting from leaving and to return it to the owner's heirs is filled with strokes of adventure.

Rocking on towards the weekend, Kiss singer seeks trade mark registration for hand gesture. Guest Kat Mathilde analyses Mr. Simmons' application for the “devil's horns” trademark and sets it in context against past uses of the very same gesture in the music world and as a broader cultural gesture.


Weekly Roundups: Tuesday Wonders, Sunday Surprises


Image credits: Cecilia Sbrolli

PREVIOUSLY ON NEVER TOO LATE


Never Too Late 151 [week ending on Sunday 10 June] Mozart and Other Pirates I TILTing Perspectives 2017 report (1): The healthcare session I TILTing Perspectives 2017 report (2): The IP session and the Key Note I Application to amend nappy patent not so watertight - IPEC holds nappy patent invalid for added matter and lack of clarity I SugarHero and the Snow Globe Cupcakes - Copyright and Food Videos I Mr Justice Birss introduces the brand new FRAND Injunction in Unwired Planet v Huawei I French Counseil d'État invalidates decrees implementing law on out-of-commerce works I A Tight Squeeze: Matters of Comity and Justiciability I Life as an IP Lawyer: Milan I AIPPI/AIPLA Event: Copyright in a digital age - US and UK perspectives

Never Too Late 150 [week ending on Sunday 4 June] BREAKING: German court makes two (very important) copyright references to the CJEU I Implausibly incredible or just plain insufficient? I Marks misleading the public on the paternity of copyright works are fraudulent - say French Supreme Court I Should the court be indifferent to consumer indifference regarding the mark? I ‘Display At Your Own Risk’: A Tour into ‘Copyright Surrogacy’ I To UPC or not to UPC? That is the question... (Part 1) I Book Review: Patents for Technology Transfer I Event Report: Combat the Copycats

Never Too Late 149 [week ending on Sunday 28 May] IPSoc Event Report: The ever-evolving law on the "communication to the public" right | Nestlé loses yet another KitKat battle | Judge sounds alarm of weakened US patent system, while industry groups start amending Section 101 | BREAKING: Supreme Court limits US patentee's forum shopping capabilities | Shinder, Shinder, Shinder … will you ever be like Tinder? | US Supreme Court uses TC Heartland to blunt key troll tool, but will California welcome the next wave of troll litigation? | Is there copyright in the taste of a cheese? Sensory copyright finally makes its way to CJEU | Big Data, products & processes: being a German patentee in the era of the Rezeptortyrosinkinase decisions | Life as an IP Lawyer: Singapore | Appointed Person issues first appeal decision in a design case | The meaning of "red carpet" in two and three dimensions: from Ancient Greece to Cannes | Judge Alsup driving forward Uber-Waymo trade secret dispute amongst "red flag" disclosure hearings | Monday Miscellany | Friday Fantasies.

Never Too Late 148 [week ending on Sunday 21 May] Book Review: Russell-Clarke and Howe on Industrial Designs I Scope of review by the General Court of decisions by the EUIPO Board of Appeal: the last act in LAGUIOLE I Dining out on trade marks - ZUMA - the own name defence for pets and groundless threats I The popular China copyright monitoring website 101 I Where are the women? Supreme Court hosts London launch of ChIPs with call to action to advance women in tech, law and policy I Br*x*t and brands – out of the EU in 680 days I In memoriam: Adolph Kiefer, Olympic gold medalist, innovator and inventor extraordinaire I Digital copies, exhaustion, and blockchains: lack of legal clarity to be offset by technological advancement and evolving consumption patterns? I German TV show allowed to call right wing politician 'Nazi sl*t', Hamburg court rules I Latest leak reveals that review of EU IP enforcement framework is currently in a deadlock I Sunday Surprises, Around the IP Blogs

          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Linux file system management. Experience designing and building complex, high performance platforms to support Big Data ingestion, curation and analytics....
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
The DevOps Engineer will require a deep understanding of key Big Data software applications and operating systems. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          (USA-VA-Fairfax) Cyber Intelligence Analyst 2/3   
**Cyber Intelligence Analyst 2/3**
**Requisition ID: 17015374**
**Location(s): United States-Virginia-Fairfax**
**US Citizenship Required for this Position: Yes**
**Relocation Assistance: No relocation assistance available**
**Travel: Yes, 10% of the Time**

How do cyber terrorists get past the industry's best? They don't. There are too many of us fighting virtual threats, protecting enterprises and entire countries from large-scale attacks. From creating a city-wide wireless network for our first responders, to protecting our nation from cyber threats, to building software-defined radios that change how our military communicates, our Cyber & Intelligence Mission Solutions (CIMS) Division helps life run smoothly and safely. Our culture is one of excellence; people, teamwork, learning and delivering value to our customers. Northrop Grumman Mission Systems is looking for Cyber Intelligence Analysts who love to learn and take initiative to continue to push us to the next level in the Washington, D.C. Metropolitan Area.

Roles and Responsibilities: The Cyber Intelligence Analyst will be responsible for conducting research and evaluating technical and all-source intelligence data, with specific emphasis on network operations and cyber warfare tactics, techniques, and procedures focused on the threat to networked weapons platforms and US and DoD information networks. The successful candidates will support the analysis of network events to determine event impact on current operations and will conduct all-source research to determine adversary capability and intent. You will support the preparation of assessments and cyber threat profiles of current events based on the sophisticated collection, research and analysis of classified and open source information. You will be required to develop and maintain analytical procedures to meet evolving mission requirements and maximize operational impact. As a mission partner, you will be requested to produce high-quality papers, presentations, recommendations and/or findings for senior US government officials. Limited local travel to attend meetings may be required by our customer. Certain positions may offer opportunities for more extensive travel, depending on customer requirements. This requisition may be filled at a higher grade based on qualifications listed below.

**Basic Qualifications**
This requisition may be filled at either a level 2 or a level 3.
• Basic Qualifications for a level 2 are a Bachelor's Degree in Business, Accounting, Economics, Finance or other related discipline; four (4) years of additional financial experience may be considered in lieu of degree; and two (2) years' experience as a financial analyst examining trends and relationships, with demonstrated experience or knowledge of global financial institutions, global financial transactions, and/or global threat finance topics.
• Basic Qualifications for a level 3 are a Bachelor's Degree in Business, Accounting, Economics, Finance or other related discipline; four (4) years of additional financial experience may be considered in lieu of degree; and five (5) years' experience as a financial analyst examining trends and relationships, with demonstrated experience or knowledge of global financial institutions, global financial transactions, and/or global threat finance topics;
• Certification in anti-money laundering, fraud, financial planning, financial advising, financial forensics, and/or auditing;
• Capable of creating data-rich, succinct analytical reports by researching, analyzing, prioritizing, synthesizing and solving problems of big data;
• Highly capable user of Microsoft Excel features, to include pivot tables;
• Excellent verbal and written communications skills;
• Willingness and ability to adapt to changing customer mission requirements as necessary;
• Active TS/SCI with Polygraph clearance

Preferred Qualifications
• Demonstrated experience with customer tools and databases;
• Knowledge of emerging technologies in the global financial marketplace;
• Experience training and/or briefing others on global financial marketplace topics of interest;
• Experience creating Microsoft Access databases;
• Foreign language capability

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.

**Title:** _Cyber Intelligence Analyst 2/3_ **Location:** _Virginia-Fairfax_ **Requisition ID:** _17015374_
          Big Data Infrastructure Industry Worldwide Growth and Development   
(EMAILWIRE.COM, July 01, 2017) In this report, we analyze the Big Data Infrastructure industry from two aspects: production and consumption. On the production side, we analyze the production volumes, revenue and gross margins of its main manufacturers and...
          (USA-MD-Woodlawn) Mgr Computer Operations 2   
**Mgr Computer Operations 2**
**Requisition ID: 17015156**
**Location(s): United States-Maryland-Woodlawn**
**US Citizenship Required for this Position: No**
**Relocation Assistance: No relocation assistance available**
**Travel: No**

Do you want to work with cutting-edge technology that is driven by something human: the lives our technology protects? If so, Northrop Grumman may be the place for you. It's not the systems that drive us: it's the soldier our systems bring home. It's not just the equipment that motivates us: it's the people our equipment protects. It's not the innovation that gets us up in the morning: it's whom those innovations serve. We're united by our work to help people and protect the world. And that mission makes our team even stronger. When you join Northrop Grumman, you'll have the opportunity to connect with coworkers in an environment that's uniquely caring, diverse, and respectful. Employees share experiences, insights, perspectives, and creative solutions with some of the best minds in the industry. We collaborate through integrated product teams, cross-functional teams, and employee resource groups, while thriving through the support of training and development, mentors and everyday coaching, along with extensive health and work/life benefits. We're committed to our employees' professional and personal development and success. Northrop Grumman recruits top talent with traditional and non-traditional backgrounds in order to ensure our team is united, connected, skilled, focused and innovative. An inclusive workplace of people with diverse backgrounds, experiences, and perspectives is the key to our performance. At Northrop Grumman, we want our employees to bring their whole self to work. All your different sides are welcome here, as we believe they make our team, our products and our services that much better.

Northrop Grumman's Technical Services sector is seeking an Operations Manager to join our team of qualified, diverse individuals in Woodlawn, MD. Look to a future of excellence by joining a Northrop Grumman team delivering cutting-edge technology solutions to our clients. The qualified applicant will become part of Northrop Grumman's Health Solutions Management division, which focuses on Medicare and Medicaid fraud protection for our Federal, state, and local government clients. This role is crucial to the success of our team. The successful job candidate must have excellent communication skills and be a detail-oriented self-starter, will be required to coordinate with internal and external support teams, and is skilled at managing databases from design to implementation. This person will be an individual contributor but will work well within and across teams to accomplish his/her tasks. The chosen candidate will be a member of a multi-disciplinary team developing analytical applications for Northrop Grumman's Healthcare systems.

Basic Qualifications:
+ ITIL Certification
+ Bachelor's Degree in IT or related subject.
+ Minimum of five (5) years of management experience, managing a staff of 5 or more people.
+ Minimum of ten (10) years of technical experience working in a data center, production control environment.
+ Minimum of five (5) years of experience with subcontract and stakeholder management.
+ Experience managing and/or leading an IT operations/help desk/support services department.
+ Proven written and verbal communications skills a must.
+ Demonstrated experience working as a contractor to a federal client.
+ Ability to work in a high-energy, dynamic environment.
+ Ability to multi-task in time-constrained scenarios.
+ Must be able to obtain a Position of Trust designation, which requires Legal Permanent Resident status or US Citizenship.
+ Knowledge in the supervision of the administration of the following technologies: J2EE, RDBMS, virtualization, hardware, networking, storage.

Preferred Qualifications:
+ CMS experience
+ Experience in subcontract management.
+ Knowledge of the Fraud, Waste, and Abuse business
+ Knowledge of the following technical areas: Big Data, analytics packages, business intelligence, identity management
+ Prior Remedy incident management system use a plus
+ Knowledge of Agile development methodology a plus

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.

**Title:** _Mgr Computer Operations 2_ **Location:** _Maryland-Woodlawn_ **Requisition ID:** _17015156_
          Master's thesis topics in artificial intelligence, software and networks   
The Payetakht software group undertakes all kinds of student projects across Iran and abroad: more than 20 programming projects, theses and proposals for computer science students at universities in Colombia, India, Malaysia, Germany, Sweden, Denmark, England, the Philippines, Dubai, Turkey and elsewhere, all carried out by the group itself (including "several suggested student programming projects for computer science majors - undergraduate, masters and PhD students - from the Network Security Lab at Columbia"). It handles theses and proposals at every level from associate to doctoral degrees in software, computer architecture, artificial intelligence, information technology, network security, secure telecommunications and e-commerce, in all programming languages. Consulting services include free help choosing a thesis topic, preparation of the thesis proposal, delivery of all thesis chapters to an agreed schedule, questionnaire design, interviews and analysis of the collected data with the relevant software, and finally a scientific paper for reputable domestic (scientific-research) journals and foreign ISI/IEEE journals, including writing and editing ISI papers for high-impact-factor venues.

Advertised topic areas include: the role of ERP, information systems and risk in business intelligence; challenges in cloud and grid computing (security, storage, performance, availability, resource allocation and scheduling, load balancing); data mining algorithms (classification, clustering, association rules, time-series prediction, feature selection and extraction, dimensionality reduction, search engine personalisation); social network analysis (structure and community detection, spam filtering); data storage technologies (SQL, NoSQL, MapReduce, Hadoop, Big Data); heuristic, metaheuristic and multi-objective algorithms (genetic algorithms, MOGA, NSGA-II, PSO, MOPSO, ant colony, bee colony, imperialist competitive, cultural and differential evolution algorithms); image processing (face recognition, segmentation, compression, watermarking); learning algorithms (ANFIS and other neural networks, Bayesian networks, SVM); and tools such as Visual Studio, Matlab, Weka, RapidMiner, Clementine and Cloudsim, with implementation in Python, Java, C, C#, C++, MySql, Sql Server, VB.NET and PHP.

The advertisement goes on to list dozens of ready-made projects, from IVR systems and sensor-network and routing-protocol simulations in NS-2 and OPNET to image watermarking, OCR, face and fingerprint recognition, PCM modulation and robot-arm simulation; claims "the largest bank of ready source code in all programming languages", with full documentation, line-by-line explanations and post-delivery support; warns that impostors have been defrauding customers using the group's name and that its only contact is engineer Khosravi on 09191022908; and gives its addresses: www.pcporoje.com, http://tezcomputer.com, http://tezcomputercom.blogfa.com, infoporoje.net@gmail.com.
          AWS to open HK infrastructure region next year   

Amazon Web Services (AWS) is planning to open an infrastructure region in Hong Kong in 2018, making the city the eighth AWS Region in Asia Pacific.

AWS' launch of the Hong Kong infrastructure region will allow Hong Kong customers to store their data locally, and to build flexible, scalable, secure, and highly available applications.

It will also enable Hong Kong customers to enjoy fast, low-latency access to websites, mobile applications, games, SaaS applications, big data analysis, Internet of Things (IoT) applications, and more.

At launch, the new AWS Region will...
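
In SDK terms, the locality and latency benefits described above come down to pinning clients and buckets to the new region. Here is a minimal sketch using Python's boto3, where the region code "ap-east-1" and the bucket name are assumptions for illustration (AWS had not yet published the Hong Kong region identifier at the time of this announcement):

```python
# Minimal sketch: pin an S3 client and a new bucket to the (assumed)
# Hong Kong region so data is stored locally and served with low latency.
import boto3

REGION = "ap-east-1"  # assumed region code; not announced by AWS at the time

s3 = boto3.client("s3", region_name=REGION)

# Buckets created with a LocationConstraint keep their data in that region.
s3.create_bucket(
    Bucket="example-hk-data-bucket",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```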

          Big Data Developer - Verizon - Burlington, MA   
Beyond powering America’s fastest and most reliable network, we’re leading the way in broadband, cloud and security solutions, Internet of Things and innovating...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Analyst Angle: Real-time analytics to understand QoE and optimize end-to-end performance   

A conversation with Robert Laliberte, VP Marketing, Empirix. The text below is only a summary; download a transcript of the complete interview and access the complete report, "Mastering Analytics: How to benefit from big data and network complexity". How can mobile operators deal with all the data they have and optimize their networks, rather than be overwhelmed? How [...]

The post Analyst Angle: Real-time analytics to understand QoE and optimize end-to-end performance appeared first on RCR Wireless News.


          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK HR Service Centre at 1-877-694-7547 (US Toll Free) or +1...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          MareNostrum 4: A new supercomputer for Barcelona   
The Barcelona Supercomputing Center debuts the fourth model in its MareNostrum series, a supercomputer key to the analysis of big data in fields such as astrophysics, engineering, biomedicine and materials physics.
          Software Engineers - Algorithms - Intel - Toronto, ON   
These teams work on some of the hardest infrastructure, big data and NP-hard optimization problems around. As part of Intel, we will continue to apply Moore's...
From Intel - Sat, 17 Jun 2017 10:23:09 GMT - View all Toronto, ON jobs
          [video] @MooseFS Distributed File Systems | @CloudExpo #Cloud #AI #DX #BigData #Storage   
"We do one of the best file systems in the world. We learned how to deal with Big Data many years ago and we implemented this knowledge into our software," explained Jakub Ratajczak, Business Development Manager at MooseFS, in this SYS-CON.tv interview at 20th Cloud Expo, held June 6-8, 2017, at the Javits Center in New York City, NY.



          June 2017 Server StorageIO Data Infrastructures Update Newsletter   

Volume 17, Issue VI

Hello and welcome to the June 2017 issue of the Server StorageIO update newsletter.

For those of you in the northern hemisphere it is time for summer holidays, while in the southern hemisphere it's winter time. That means there is a lot going on outside of work; however, June has also seen a lot of activity in and around IT data infrastructure and data centers. Check out some of the industry trends, news and updates below.


A quick update following up from the May newsletter: my new book is now available via Amazon.com, CRC Press and other venues in hardcover as well as electronic versions. Think of this as the soft launch, with a formal launch and more information being rolled out soon. For now, you can visit the landing page for Software Defined Data Infrastructure Essentials - Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press/Taylor & Francis/Auerbach) at storageio.com/book4 to learn more, including viewing the table of contents and preface and seeing how the book is organized, among other items.

In This Issue

  • Server StorageIO trends
  • Server StorageIOblog posts
  • Server StorageIO News Commentary
  • Various Events and Webinars
  • Industry Resources and Links
  • Connect and Converse With Us
  • About Us

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

Cavium announced 10, 25, 40 and 50Gbps Ethernet server storage I/O NIC solutions (e.g. the FastLinQ 41000 series).

The NVM Express trade group (e.g. nvmexpress.org) announced the completion of the NVMe 1.3 specification. New optional features include support for mobile platforms and boot partitions, along with scaling for enterprise as well as cloud environments. Learn more about the specification at the NVMexpress.org site, and find more NVMe material at thenvmeplace.com.

Keep in mind that even if the answer is NVMe, there are still plenty of questions, along with various options from front-end to back-end: NVMe over PCIe, NVMe over Fabrics (NVMeoF), and U.2/SFF-8639 and M.2/NGFF form factors, among others.
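
One of those questions is simply which NVMe devices a host actually sees, front-end or back-end. A minimal sketch, assuming a Linux host where the kernel exposes NVMe controllers under /sys/class/nvme (the model and serial attribute files are standard sysfs entries, though the exact set varies by kernel version):

```python
# Minimal sketch: enumerate NVMe controllers via Linux sysfs.
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")

def list_nvme_controllers():
    """Yield (name, model, serial) for each NVMe controller in sysfs."""
    if not SYSFS_NVME.exists():
        return  # no NVMe driver loaded, or not a Linux host
    for ctrl in sorted(SYSFS_NVME.iterdir()):
        def read_attr(attr: str) -> str:
            path = ctrl / attr
            return path.read_text().strip() if path.exists() else "unknown"
        yield ctrl.name, read_attr("model"), read_attr("serial")

if __name__ == "__main__":
    for name, model, serial in list_nvme_controllers():
        print(f"{name}: model={model} serial={serial}")
```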

The Fibre Channel Industry Association announced FC-NVMe interoperability plugfest and Gen 6 32GFC activity to support next generation data infrastructures and data centers.

Storage vendor Tegile announced it is joining the growing ranks of vendors adding NVMe support with its IntelliFlash OS 3.7, along with other enhancements.

For those of you involved with Windows Server environments along with server, storage and I/O networks, check out Darryl van der Peijl's multi-part series on RDMA, DCB, PFC, ETS and related topics.

HPE and Hedvig announced they are combining forces on solutions to address hybrid cloud storage needs.

IBM and Cisco announced enhancements around their converged (Cisco-powered servers) solution for VDI and hybrid cloud workloads.

Big data and analytics vendor MapR announced enhancements to its converged data management platform for cloud-scale data fabrics.

Panzura has enhanced its Freedom software-defined storage management solution with version 7 to support growing volumes of unstructured data while easing management functions, along with performance updates.

Red Hat announced Ceph Storage 2.3, based on Ceph 10.2 (Jewel) and including an NFS gateway.

Scality announced enhancements to its Ring software-defined cloud and object storage solution, including enhanced security along with data protection capabilities.

Check out other industry news, comments, trends perspectives here.

 

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

  • GDPR (General Data Protection Regulation) Resources Are You Ready?
  • Microsoft Windows Server, Azure, Nano Life cycle Updates
  • AWS S3 Storage Gateway Revisited (Part I)
  • Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)
  • May 2017 Server StorageIO Data Infrastructures Update Newsletter

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
Storage can be held back by slow I/O performance, which causes expensive compute resources and memory to sit consumed while they wait. NVMe reduces wait time while increasing the amount of effective work, enabling higher-profitability compute. The storage I/O capabilities of flash can be fed across PCIe faster, enabling multi-core processors to complete more useful work in less time.

Via EnterpriseStorageForum: 10-Year Review of Data Storage
The adoption of hybrid cloud and hybrid converged server storage has happened more rapidly than many expected. And despite firm pronouncements of their demise, FC, tape and HDD are still very much with us.

Via CDW: Your IT Department Can Help Your Company's Bottom Line. Here's How
Not only are the servers more robust performance-wise, but they've got more compute capability, can handle more workloads, have more memory and also have better resiliency.

Via EnterpriseStorageForum: Top 10 Tips for Software-Defined Storage Deployment
Dell 14G PowerEdge Servers give you greater compute and I/O capability, as well as the density you need, with NVMe and 25 Gig Ethernet on board.

Via CDW: Meeting IoT's Demands for Networking

View more Server, Storage and I/O trends and perspectives comments here

Events and Activities

Recent and upcoming event activities.

Sep. 13-15, 2017 - Fujifilm IT Executive Summit - Seattle WA

August 28-30, 2017 - VMworld - Las Vegas

June 22, 2017 - Webinar - GDPR and Microsoft Environments

May 11, 2017 - Webinar - Email Archiving, Compliance and Ransomware

See more webinars and activities on the Server StorageIO Events page here.

Server StorageIO Industry Resources and Links

Useful links and pages:
Microsoft TechNet - Various Microsoft-related items, from Azure to Docker to Windows
storageio.com/links - Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com - Cloud and object storage topics, tips and news items
OpenStack.org - Various OpenStack related items
storageio.com/protect - Various data protection items and topics
thenvmeplace.com - Focus on NVMe trends and technologies
thessdplace.com - NVM and Solid State Disk topics, tips and techniques
storageio.com/converge - Various CI, HCI and related SDS topics
storageio.com/performance - Various server, storage and I/O benchmark and tools
VMware Technical Network - Various VMware related items

Ok, nuff said, for now.

Cheers
Gs

Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2017 Server StorageIO(R) and UnlimitedIO All Rights Reserved


          Cash register strain: how businesses are switching to the new fiscal hardware   

From July 1, 2017, entrepreneurs must switch to online cash registers, which, in the lawmakers' view, will make the market more transparent. Business owners say the change will also bring higher prices and a return of gray schemes.

Photo: Anatoly Zhdanov / Kommersant

In black and white

Amendments to Federal Law 54, "On the application of cash register equipment for cash payments," signed by Vladimir Putin on July 3, 2016, obliged all entrepreneurs to replace traditional cash registers with online registers, which streamline the transfer of data to the Federal Tax Service (FNS). The initiative found no support among businesses: the business community argued that replacing a single register costs tens of thousands of rubles, complicates the seller's work, and effectively strips the preferential tax regimes of their meaning. When the first volunteers began installing online registers, a new problem surfaced: there were not enough of the fiscal recorders the "smart" registers need in order to work. "A very strong negative mood still surrounds the amendments: 'ill-considered,' 'difficult,' 'we can't figure it out ourselves, we'll have to hire someone,' 'there is no equipment,'" says Maxim Mitusov, head of the ModulKassa project.

The authorities' stated goal is to make Russia's retail market more transparent. Indeed, if the tax service can monitor the movement of funds in real time, it will find it easier to promptly shut down gray schemes. But good intentions can backfire, notes Alexei Fedorov, president of the Association of Internet Trade Companies (AKIT). In his view, some online stores may stop accepting cards and start selling "past the register," without issuing receipts to buyers. "The general movement toward whitening the market may well turn into its blackening," agrees Tatyana Glazacheva, executive director of the payment system Robokassa. The FNS is prepared for such a scenario, Fedorov believes. By his forecast, "cleaning up" the market will take six months at most. "In the first period after the law takes effect, the FNS will actively check compliance, and black-market players will have to pay fines and then leave the market altogether," he is certain. The fine for operating without an online register is 10,000 rubles for a sole proprietor and 30,000 rubles for a company, or the price of the purchase if it exceeds those sums, notes Olga Ponomareva, managing partner of the SBP group of legal and audit companies.

How online registers differ from ordinary ones

An online register, or fiscal recorder, unlike an ordinary cash register, is equipped with a special fiscal storage module. This device replaces the EKLZ unit (the abbreviation stands for "protected electronic control tape"). The fiscal storage module records every operation passing through the register and sends the data in real time to a fiscal data operator (OFD), which passes it on to the FNS. For this, the register must be connected to the internet. The fiscal storage module has to be replaced every 13 or 36 months (for entrepreneurs on the imputed-income scheme (ENVD) and the patent scheme, respectively). Besides the standard set of store details, receipts will now carry QR codes; by scanning one, a buyer can check whether the receipt was actually recorded. "Every individual thereby becomes practically an employee of the tax inspectorate," notes the president of AKIT. When ringing up a purchase, the seller must ask the customer whether they want an electronic version of the receipt. If the customer agrees, the seller enters the buyer's email address or mobile phone number into the register, and the online receipt is then sent there. Cash books and the Z-reports drawn up at the end of each shift will thus become a thing of the past.

First among equals

The cash register reform is proceeding in several stages. Companies could switch to online registers voluntarily from the moment the amendments took effect, that is, from July of last year. From February 2017, the tax authorities stopped registering old-style machines. Most entrepreneurs must switch fully to "smart" registers from July 1 of this year. Those working under ENVD and the patent scheme received a one-year deferral and must complete the transition by July 1, 2018. In the end everyone will have to acquire a register, even those who could previously trade without printing receipts, a right enjoyed precisely by the sole proprietors and companies on ENVD and patents. "That preferential regime was created precisely to bring business out of the shadows and increase tax collection," notes Maria Zheltova, head of banking consulting at the Bespalov and Partners group. "Now the question of whether to work with or without a register no longer arises, and most small entrepreneurs will most likely slip back into the shadows."

For many stores this turn of events will be a serious blow. "More than 90% of our products are sold through franchise outlets, and they pay tax per square meter of floor space. That allowed us to hold down the markup on our products," says Vladimir Denisenko, general director of the footwear firm Unichel. The switch to online registers will substantially raise the outlets' costs, which, he predicts, will push retail prices up by about 10% on average.

Online stores will also have to install registers. "Earlier versions of Law 54-FZ did not mention online stores at all, but the new edition of the law has swamped online retail," notes Oksana Kobzeva, an expert at the Kontur.OFD project. She says lawmakers obliged online stores to issue the new type of receipt but did not take the particulars of this market into account. Many small online sellers rely on third-party courier services: couriers not only deliver goods but also collect payment and only later transfer the money to the seller. The only exception is when the buyer pays online by bank card right away. The change in the rules may lead small online retailers to stop accepting bank cards so as not to spend money on registers.

Stores selling alcohol had to install online registers by March 31, 2017. "We support the state's course toward whitening the market, but this particular law discriminates against our category and complicates life for our partners," says Oraz Durdyev, director for legal affairs and corporate relations at SUN InBev, which produces Bud, Stella Artois, Sibirskaya Korona and other beers. "In the spring and summer season, sales of alcohol, especially beer, rise severalfold, and the legislator is looking for any way to top up the budget," suggests attorney Alexander Redkin.

Photo: Alexander Nikolaev / TASS

Online registers in numbers

• 3.5 million online registers should be operating in Russia by July 2018
• 20,000-50,000 rubles: what replacing a traditional register with its online version will cost an entrepreneur
• About 5,000 rubles: the price of a standard traditional cash register
• 100-120 billion rubles: the size of the market for new registers and their software
• 36 billion rubles: annual spending on replacing fiscal storage modules and servicing registers
Sources: ModulKassa, Merkata, RBC calculations

The price of reform

For small and micro businesses the amendments mean a substantial financial burden, and some enterprises face outright closure, believes Maria Zheltova of Bespalov and Partners. The cost of modernizing registers really is considerable: by the estimate of ModulKassa's Maxim Mitusov, replacing one cash register costs 40,000 rubles on average, whereas an old-style register cost from 5,000 rubles. A smart register also has to be serviced, and its fiscal storage module regularly replaced. Large chains will have to pull far more money out of circulation: according to Mikhail Goncharov, founder and manager of the Teremok fast-food chain, the company spent about 5 million rubles modernizing 400 registers. That sum covers new registers with fiscal storage modules (or upgrades of old ones) and the software that transmits the data to the FNS.

But that is far from all of the cost. Many entrepreneurs simply do not understand what is required of them and are ready to hire outside specialists to install and configure the registers. "Small enterprises are now opening a great many vacancies connected with this innovation. Companies need programmers, analysts, engineers. That, of course, increases payroll," notes Natalya Storozheva, general director of the Perspektiva center for business and career development. The state provides tax deductions against the cost of the new machines, but not for everyone, and they will not cover installation costs, says Izabella Atlaskirova, executive director of a regional branch of Opora Russia. In particular, entrepreneurs on ENVD or a patent who install a register in 2018 can count on compensation of 18,000 rubles.

Insurmountable obstacles await businesspeople in remote regions, warns Alexei Golovchenko, managing partner of the law firm ENSO. Online registers can transmit data to the FNS only where there is an internet signal, which is scarce in remote villages, and even in city stores located in basements. "This problem falls on the entrepreneur's shoulders. If you want to work, install a satellite antenna," Golovchenko sighs. People adapt as they can: Nadezhda Biletskaya, owner of the Bely Mishka cafe and store in Oryol, had to pay for internet twice. "The register needs a separate, protected line, so I had to run two connections for the cafe, where I also offer free Wi-Fi for visitors," she says. According to Biletskaya, service tariffs in small towns are significantly higher than in the capital, and the service leaves much to be desired because competition among providers is low. Many regional entrepreneurs cope like this: during the day they print receipts without a network connection, and in the evening they find an access point and send the data to the FNS. How tax officials will view such schemes after July 1 is not yet clear.

Another problem is training staff to work on the new registers. After all, the seller will have to ask every buyer whether they want an electronic receipt. This will greatly complicate and slow down the sales process, believes Maria Zheltova of Bespalov and Partners. "The companies covered by the law are utterly varied, from a store in a city of a million people to a grandmother in a village. The level of technical literacy among sellers varies enormously," she notes. "Most likely, many simply will not be trained in time to enter customer data, and processing returns will become a headache of its own."

Meanwhile, a shortage of fiscal storage modules, the devices that encrypt and protect fiscal data and without which online registers cannot operate, is already plain. Slightly more than 600,000 registers now operate under the new rules, AKIT calculates, roughly half of the cash registers in use. Another 200,000-300,000 registers, about a quarter of the market, have been paid for but not yet delivered, says Maxim Mitusov. The FNS speaks of 850,000 machines registered by the end of June. Entrepreneurs themselves report a serious equipment shortage even in Moscow. "Our franchisees on Bogoslovsky Lane had to wait more than 35 calendar days for register equipment," complains Polina Kirova, development director of the Rybset brand. "That is a very long time for retail." She says connecting a register drags on for several hours, whereas an ordinary cash register could be set up very quickly.

Fiscal storage modules are produced by only three companies in the country: ZAO Atlas-kart, OOO Rik and ZAO Bezant. Deputy Andrei Lugovoi raised the problem at a State Duma session in March of this year. He said the queue for fiscal storage modules stretches several months ahead, and prices reach 6,000-8,000 rubles against a production cost of 700-800 rubles. The situation arose from a decision of the FSB, which has certified only one model of fiscal storage module. "The snag lies in producing the module's main component, the key chip. That part faces the strictest requirements, and meeting them is quite hard," explains attorney Sergei Ledovskikh. To be fair, law-abiding businesspeople who could not obtain modules will not be fined for now. In late May the Finance Ministry and the FNS issued a letter stating that if a retail outlet has concluded a contract for the supply of register equipment within a "reasonable time," no sanctions will be applied to it. "Business has done everything that depended on it; all that remains is to wait," sums up Maxim Mitusov.

A new services market

The amendments have given life to a new niche in the cash register market. In the autumn of 2016 the first fiscal data operators appeared: intermediaries between entrepreneurs and the FNS that receive, process and relay data from online registers to the tax service. Register owners must sign a service contract with one of the ten such companies (the list is published on the FNS website). The new registers themselves, and the software for them, are sold by 20-30 companies, both experienced market players and startups trying to ride the new wave. There is something to chase: by RBC's estimate, about 1.8 million registers are registered in Russia, of which roughly 1.1 million are in active use. Counting the companies not yet required to use cash registers, about 3.5 million units will need to be modernized by 2018. By Maxim Mitusov's calculation (ModulKassa), a one-time replacement of the machines makes this niche worth 100-120 billion rubles. Servicing and supplying new equipment could bring in another 35-36 billion rubles a year, reckons Anton Elikov, an expert at the Merkata small-business support center.

Pluses after the minuses

For all its obvious minuses, the new register system will bring small enterprises tangible benefits, Maxim Mitusov is sure: "Through the pain, entrepreneurs will gain a host of new opportunities. Most entrepreneurs today keep their books in Excel at best, or simply in a notebook. Automation will let them see how business is really going and what their sales volume actually is." Inspections of small businesses will decrease, predicts attorney Sergei Ledovskikh: financial activity will become transparent, and hiding gray income will be much harder. "On-site inspections will become not just targeted but precision-targeted. If the FNS shows up, something is definitely amiss," says Maxim Opilkin, development director at Grotem, the developer of a mobile application for online registers. "For example, a restaurant that rings up 20 receipts a day looks suspicious when a similar restaurant across the street issues 120. Big data is now in the service of the FNS."

The information the online registers collect will help companies build an effective marketing strategy, believes Evgeny Bakhin, IT director of Inventive Retail Group. "An email address or a phone number is a key to a customer's purchase history. With them you can target your audience and prepare individual offers for each buyer," he is sure. "Previously such opportunities were limited to the circle of loyalty program members." Most stores see the new law as a burden they would never take on voluntarily, notes Anton Elikov of Merkata. "In fact, automation is a real chance for traditional retail to attract both customers and suppliers and to compete successfully with the big chains," he believes. "Those who manage to live through the process can expect a new stage of growth."

News source
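The data path the article describes (register, then fiscal storage module, then fiscal data operator, then tax service) is simple to model. A toy sketch only: every field name, the JSON encoding, and the hash-based "fiscal sign" below are invented for illustration and bear no relation to the real 54-FZ binary exchange format:

    # Toy model of the online cash register data flow described above:
    # register -> fiscal storage -> fiscal data operator (OFD) -> FNS.
    # All field names and the "fiscal sign" are invented for illustration.
    import hashlib, json, time

    def make_receipt(items, fn_serial="9999078900001234"):
        receipt = {
            "fn": fn_serial,                # fiscal storage serial (made up)
            "ts": int(time.time()),         # time of sale
            "items": items,
            "total": sum(price for _, price in items),
        }
        # A real fiscal storage module signs every receipt; we fake it.
        payload = json.dumps(receipt, sort_keys=True).encode()
        receipt["fp"] = hashlib.sha256(payload).hexdigest()[:10]
        return receipt

    class OFD:
        """Stand-in for the operator relaying receipts to the tax service."""
        def __init__(self):
            self.forwarded = []
        def submit(self, receipt):
            # A real OFD validates the fiscal sign before forwarding.
            self.forwarded.append(receipt)
            return {"status": "accepted", "fp": receipt["fp"]}

    ofd = OFD()
    r = make_receipt([("espresso", 120.0), ("croissant", 95.0)])
    print(ofd.submit(r))  # a buyer could later verify the receipt by its sign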

The post Cash register strain: how businesses are switching to the new fiscal hardware first appeared on События дня - InfoRU.news.


          (Senior) Consultant Big Data Management - English - Deloitte - Noord-Holland   
You completed a relevant Master of Arts / Master of Science (like Computer Science, Information Science or Technical Business Administration)....
From Deloitte - Tue, 06 Jun 2017 08:15:43 GMT - View all jobs in Noord-Holland
          Web-Scale Converged Infrastructure Designed to Simplify IT   
Download this resource to key into the specs of one hyper-converged infrastructure, and discover why its implementation is ideal for supporting multiple, virtual workloads such as VDI, private cloud, database, OLTP and data warehouse as well as virtualized big data deployments. Published by: Dell EMC and Nutanix
          Intern/Working Student (f/m) Data Analyst - PwC - Hannover   
You would like to engage with Business Intelligence and/or Big Data technologies and learn more about the interplay between IT and business...
Found at PwC - Sun, 26 Mar 2017 04:17:11 GMT - View all Hannover jobs
          SAP: it is time for CIO empowerment   

A digital company? I would like to be one, but I cannot manage it. The Skills for Digital Transformation study, carried out by the Technical University of Munich and SAP, paints a critical picture: companies know digital matters, but they lack the specific skills and cannot adapt to the change. Business thus becomes fragile, and digital transformation turns from an opportunity into a threat.
The survey was conducted between July and October 2015 among 81 executives, 53 of whom hold CIO positions. Respondents come from 16 countries, with a rather interesting mix: 28% China, 26% Germany, 10% each Italy and Argentina, and 5% Australia, the largest English-speaking contingent. Of the companies represented, 27% have revenues under 250 million dollars and 40% above one billion. The picture of DT penetration is very clear, along with some recipes for accelerating the adoption of innovation.

We know, but we cannot
According to the study, 27% of respondents said they have a plan for implementing their digital strategies, but only 17% acknowledged that their employees have the right digital skills to drive the new evolution of the business. This raises more than one doubt: for example, almost 73% of respondents confirm the importance of analytics and big data, but only 39% claim to have the necessary skills, so the opinions given are general rather than grounded in the work actually performed.
At the center of digital transformation is a cultural change in the way companies think and operate. Success lies not so much in technologies as in a new philosophy of digital collaboration among all employees, regardless of role and level. The study shows that a deep understanding of business objectives across divisions, together with extensive use of technology, is a key factor in enabling companies to ride the wave of digital transformation successfully. In addition, technology and strategy must be used in synergy to promote innovation and growth.
More than 80% of the CIOs surveyed said that the ability to lead change is the second most important trait for success in the new digital transformation scenario, right after security. Digitalization shows its disruptive force by threatening their business model in 38% of cases, yet only 35% of respondents say their company has a defined DT strategy. "Digital transformation has a direct impact both on business results and on IT infrastructure," commented Helmut Krcmar, Chair of Information Systems in the Department of Informatics at the Technical University of Munich; "our analysis underscores the need to train employees and managers across functions and, at the same time, to attract new qualified talent: we want to show organizations the importance of digital skills, to close the gap and stimulate training in this direction."

Continuous training
For a successful digital transformation, deep knowledge in areas such as security, change management and enterprise network management is obviously crucial: this is why the role of IT executives must become much more strategic and be tied to the proper use of the company's talent.
"Companies must implement winning strategies to ride the change effectively," commented Bernd Welz, Executive VP and Global Head of Scale, Enablement and Transformation at SAP; "cultivating talent is fundamental precisely when the shortage of digital skills is significant, so organizations are called on to fill this gap." On internal training SAP's offering is very strong, built around the openSAP MOOC, the Learning Hub dedicated to the SAP portfolio, and the Academy Cube, which provides e-learning and job listings.
To turn DT to one's advantage, then, the main recipes concern people. The first is to give the CIO strategic responsibilities he or she does not have today, to steer the company toward DT before it is too late. The second is a sharp increase in attention to the life cycle of internal talent.

 

The article SAP: it is time for CIO empowerment is original content from 01net.


          How data can make dairy farming more profitable   
Volatility in the dairy industry is here to stay, experts say, and the big data revolution can help farmers make their businesses more profitable.
          Great resource: Calling Bullshit in the Age of Big Data - free/open lectures, tools and case-studies from Washington U Spring 2017 class   

Really cool: the University of Washington ran a Spring 2017 class entitled "Calling Bullshit in the Age of Big Data" and makes the lecture materials available at http://www.callingbullshit.org (Syllabus, Videos, Tools, Case Studies, FAQ). Michelle Nijhuis wrote a post about it at The New Yorker: How to Call B.S. on Big Data: A Practical Guide (3 June 2017).

Lecture titles:

* Introduction to bullshit
* Spotting bullshit
* The natural ecology of bullshit
* Causality
* Statistical traps
* Visualization
* Big data
* Publication bias
* Predatory publishing and scientific misconduct
* The ethics of calling bullshit
* Fake news
* Refuting bullshit


          CENTER SALES AND SERVICE ASSOCIATE-CARE   
ID-Idaho Falls, CenturyLink (NYSE: CTL) is a global communications and IT services company focused on connecting its customers to the power of the digital world. CenturyLink offers network and data systems management, big data analytics, managed security services, hosting, cloud, and IT consulting services. The company provides broadband, voice, video, advanced data and managed network services over a robust 265,
          India-Based Infosys Plans To Hire Thousands of U.S. Workers For New U.S. Locations   

Amid criticism of outsourcing firms, at least one large Indian outsourcing company is planning to hire 10,000 U.S. workers over the next two years. Infosys CEO Vishal Sikka announced the company will open four technology and innovation hubs in the U.S. "focusing on cutting-edge technology areas, including artificial intelligence, machine learning, user experience, emerging digital technologies, cloud, and big data."


          The Big Data Storymap – InFocus Blog   
I wanted to share some recent work that we have been doing inside EMC Global Services, to create a “Big Data Storymap” that would help clients understand the big data journey in a pictorial format. The goal of a storymap is to provide a graphical visualization that uses metaphors and themes to educate our clients …
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Maintain clear documentation to help increase overall team productivity. Big Data Experience:....
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Big Data Developer - Verizon - Burlington, MA   
Experience with one of Storm or Apache Spark. Experience with NoSQL databases (preferably MongoDB or Cassandra). Ability to adapt and learn quickly in fast...
From Verizon - Thu, 29 Jun 2017 10:58:16 GMT - View all Burlington, MA jobs
          Data Engineer - CreditCards.com - Austin, TX   
Bankrate Credit Cards is seeking an experienced big data engineer. We help people get the most out of their money through smart credit card recommendations
From CreditCards.com - Fri, 12 May 2017 22:18:08 GMT - View all Austin, TX jobs
          Data analysis not an end in itself for retailers   

Big Data. It is probably one of the most used terms in retail (and beyond) when it comes to the analysis of large data streams. The big question: is data analysis necessary for a retailer to keep revenue up in the future as well? When the term 'Big Data' is not translated into concrete applications, … Continued

The post Data analysis not an end in itself for retailers appeared first on Insights.


          Web Marketing Coordinator with AdWords Experience - StrongBox Data Solutions - Quebec   
Coordinate social media outreach. Looking to be part of the future in Big Data management?... $40,000 - $50,000 a year
From StrongBox Data Solutions - Fri, 24 Mar 2017 20:57:05 GMT - View all Quebec jobs
          COL - BIG DATA SPECIALIST SALES EXECUTIVE - Telefonica - Bogotá, Cundinamarca   
Manage the local commercial relationship with partners and suppliers, collaborating with the global alliances and partnerships team....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          COL - Big Data Business / Product Development Professional - Telefonica - Bogotá, Cundinamarca   
Manage the local commercial relationship with partners and suppliers, collaborating with the global alliances and partnerships team....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
This role requires interaction with process owners across Pharma R&amp;D teams. Experience designing and building complex, high performance platforms to support Big...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
             
Chapter Two of Proposed Book: Universal Care, Multiple Payer In, Single Pay Out. But Who Shall Pay? For What? And Who Shall Deliver Care?

Shall it be a centralized single-payer government system?

Shall it be through increased income and payroll taxes, or perhaps a value-added tax applied to all consumer goods except food, purchased by the poor, the rich, and everybody in between?

Shall it be by open-ended entitlement programs: expansion of Medicare, Medicaid, ObamaCare subsidies, and other government programs?

Shall it be Medicare or Medicaid for all, fulfilling the dream of Bernie Sanders and other socialist dreamers?

Shall it be paid disproportionately by redistribution: mandates on the rich, the young, and the healthy?

Shall it be paid by taxing the profits of organizations in the so-called medical-industrial complex and other for-profit entities?

Finally, shall it even be necessary if a universal basic income for all Americans, as is now being proposed, becomes a reality?

Questions

These are some of the questions that haunt the body politic and the American people.
These are questions I’ve been writing about since I graduated from Duke Medical School in 1960 and completed my pathology residency in 1965.  

That was the year Medicare, and shortly thereafter Medicaid, passed. Together these two programs cost taxpayers over $1 trillion and consume huge chunks of the $3.2 trillion spent on health care in 2016, with no end in sight.

Over the last 50 years, I have practiced medicine; served as editor of the Minnesota State Medical Journal and several national newsletters (The PHO Report, the Reece Report, and Physician Practice Options); formed an integrated physician-hospital organization; composed over 4,000 Medinnovation and Health Reform blog posts; and written ten books on health reform.

In addition, on the innovation front, I have initiated and developed an Internet-based differential diagnosis program and a health measurement program. The differential diagnosis program correctly identified over 80% of diagnoses and was issued in hundreds of thousands of reports over a six-year period. The health measurement program identified patients as below normal, normal, or above average, depending on whether they fell below, inside, or above a normal range of 80 to 120. In a study of over 4,000 Oklahoma state employees, the average HQ was 77, largely due to obesity, hypertension, diabetes, pre-diabetes, and dyslipidemias. From these studies, I concluded that it was theoretically possible to establish the diagnoses of over 90% of patients before they see the doctor, and that one can measure the health of large populations of patients and show how they could improve their health.
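For concreteness, the banding just described can be written out in a few lines. A sketch only: the text does not say how the underlying HQ score is computed, so the function below merely classifies a given score against the stated 80-120 normal range:

    # Classify a health quotient (HQ) against the 80-120 normal range
    # described above. How the HQ itself is computed is not specified
    # in the text, so scoring is left out of this sketch.
    def classify_hq(hq, low=80, high=120):
        if hq < low:
            return "below normal"
        if hq > high:
            return "above average"
        return "normal"

    for score in (77, 95, 130):   # 77 was the Oklahoma study average
        print(score, classify_hq(score))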

My books have dealt with the corporate transformation of medicine,  physician shortages,   the successes and failures of ObamaCare, and the tangled politics of health reform.

As I write, the future of health reform is as uncertain as ever, and the ideological clashes surrounding it are as deep and divisive as ever, as shown in these two editorials in the Wall Street Journal and the New York Times, which are, of course, on opposite ends of the ideological spectrum.

• "The liberal solution to every government failure is always more government. The California single-payer bill reflects the left's Platonic ideal, with the promise of free care for everyone for everything. Patients would be entitled to an essentially unlimited list of benefits, including acupuncture and chiropractic care as well as all medical care determined to be appropriate by the member's health care provider. They could see any specialist without referral. There would be no restraints on health care utilization or costs. Patients could get treated for all maladies by any physicians at no cost." (Wall Street Journal editorial, "California Single Payer Dreaming," May 27-28, 2017)

Not to be outdone, the New York Times editorial board had its say:
"Any doubts about the senseless cruelty underlying the health care agenda put forward by President Trump were put to rest last week by two government documents. One document was the Congressional Budget Office's detailed analysis of the Trumpcare bill passed by the House this month. The budget proposed billions of dollars of cuts to programs that fund research into new cures, protect the country against infectious disease, and provide care for the poor, elderly, and people with disabilities. The CBO analysis said Trumpcare would rob 23 million people of health insurance while leaving millions of others with policies that offer little protection from a major medical condition. All of which would be done in service of huge tax cuts for the richest Americans." (New York Times editorial, "Trumpcare's Cruelty, Reaffirmed," May 28, 2017)

There you have it: the gulf, chasm, and abyss between two conflicting ideological opponents talking past one another while blaming each other.

There is a third school of thought about how to bridge the gulf: machines bearing artificial intelligence and elegant algorithms that measure outcomes, showing who is right and who is wrong by using data to supplement and even replace faulty human nature. Machines, in other words, can become human, and humans can become machines. But machines have their own set of problems. They are designed by humans, big data is not knowledge or wisdom, brains are often more reliable than machines, and data alone often infringes upon privacy, security, and confidentiality between patients and physicians.

But no matter what ideology you subscribe to and no matter what technology you use to enhance efficiency, the question remains: who shall pay? Three states have had a stab at introducing single payer to achieve universal coverage: Vermont, Colorado, and California. All have failed, because the political leaders of each state came to grips with the reality that single-payer costs would be prohibitive, requiring state government to raise taxes and employers to raise payroll taxes to levels their citizens would not accept. In California, the single-payer cost would be at least $400 billion annually.

Besides, superimposing single payer on the present structure would be unbelievably disruptive to hospitals, medical supply chains, and the 16 people needed to support each individual physician. Still, 40% of Democrats favor single payer, but just 28% of all Americans favor such a move. In California, about half the money to support single payer would come from existing public money spent on health care. The rest would come from taxes, in a state that already has the highest state income tax in the nation, at 13%. A handful of aspiring politicians in other states (New York, New Jersey, Rhode Island, and Massachusetts) have proposed single-payer bills, but the appetite is not yet there for the country outside of California and the upper East Coast.

          A MapReduce-based Approach to Scale Big Semantic Data Compression with HDT   
Data generation and publication on the Web have increased over the last years. This phenomenon, usually known as "Big Data", poses new challenges related to the Volume, Velocity, and Variety ("the three V's") of data. The Semantic Web offers the means to deal with variety, where RDF (Resource Description Framework) is used to model data in the form of subject-predicate-object triples. In this way, it is possible to represent and interconnect RDF triples to build a true Web of Data. Nonetheless, a problem arises when big RDF collections must be stored, exchanged, and/or queried, because the existing serialization formats are highly verbose, so the remaining Big Semantic Data challenges (volume and velocity) are aggravated when storing, exchanging, or querying big RDF collections. HDT addresses this issue by proposing a binary serialization format based on compact data structures that allows RDF not only to be compressed but also to be queried without prior decompression. Thus, HDT reduces data volume and increases retrieval velocity. However, this achievement comes at the cost of an expensive RDF-to-HDT serialization in terms of computational resources and time. Therefore, HDT alleviates the velocity and volume challenges for the end user, but moves the Big Data challenges to the data publisher. In this work we present HDT-MR, a MapReduce-based algorithm that allows RDF datasets to be serialized to HDT in a distributed way, reducing processing resources and time, while also enabling larger datasets to be compressed.
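The core trick HDT builds on, dictionary encoding, is easy to illustrate. A toy sketch only, not the actual HDT binary format (which additionally packs the ID-triples into compressed bitmap and sequence structures and splits the dictionary by term role):

    # Toy dictionary encoding of RDF triples, illustrating the idea behind
    # HDT: long IRIs/literals are stored once, and triples become small
    # integer tuples. The example triples are invented.
    triples = [
        ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows",
         "http://example.org/bob"),
        ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name",
         '"Alice"'),
    ]

    dictionary = {}

    def term_id(term):
        # Assign the next integer ID the first time a term is seen.
        if term not in dictionary:
            dictionary[term] = len(dictionary) + 1
        return dictionary[term]

    encoded = [(term_id(s), term_id(p), term_id(o)) for s, p, o in triples]
    print(encoded)   # [(1, 2, 3), (1, 4, 5)]
    print(len(dictionary), "dictionary entries replace the repeated strings")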
          InfosysVoice: Now that data's going really big, what's next?   
Sponsor Post: Each year, we add two billion gigabytes (that's two billion billion bytes) of data just in social conversation. American companies across 15 sectors, with more than 1,000 employees each, store on average more data than the U.S. Library of Congress. Move over, Information Age, the era of big data is [...]
          COL - BIG DATA DATA SPECIALIST - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) BIG DATA DATA SPECIALIST....
From Telefónica - Fri, 30 Jun 2017 13:16:48 GMT - View all jobs in Bogotá, Cundinamarca
          COL - BIG DATA SPECIALIST SALES EXECUTIVE - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Big Data Business / Product Development Professional....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          COL - Big Data Business / Product Development Professional - Telefonica - Bogotá, Cundinamarca   
(Applies to direct and temporary employees.) Big Data Business / Product Development Professional....
From Telefónica - Fri, 30 Jun 2017 13:16:47 GMT - View all jobs in Bogotá, Cundinamarca
          Home Gadget Geeks: On Cybersecurity and Ransomware - CF036   
Cyber Frontiers is all about Exploring Cyber security, Big Data, and the Technologies Shaping the Future Through an Academic Perspective! Christian Johnson, a student at the University of Maryland will bring fresh and relevant topics to the show based on the current work he does. Please leave a REVIEW (iPhone or iPad) - https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?id=857124890&type=Podcast&ls=1&mt=1 Support the Average Guy Tech Scholarship Fund: https://www.patreon.com/theaverageguy WANT TO SUBSCRIBE? We now have Video Large / Small and Video iTunes options at http://theAverageGuy.tv/subscribe You can contact us via email at jim@theaverageguy.tv Full show notes and video at http://theAverageGuy.tv/cf036
          Brand positioning for SMEs: scenarios and objectives   

Brand positioning for SMEs. The art of knowing how to differentiate yourself and be recognized as unique in the market and in consumers' minds. Brand positioning for SMEs? In search of the differentiating elements that create value for one's own brand. In an increasingly global system, characterized by the strong push of digital darwinism and dominated by big data and intense competition, […]

The article Brand positioning for SMEs: scenarios and objectives appears first on B2corporate.


          Software Engineer, Google Cloud Platform   
CA-San Diego, Main Duties and Responsibilities: - Play an integral role in building the company's cloud computing platform leveraging Google Cloud Platform (GCP). - Take ownership of high-profile GCP software engineering and Big Data initiatives at a publicly-traded, industry-leading company. - Develop and maintain server-side code for cloud-based applications. Skills and Requirements: - 7 - 10+ years of software eng
          Senior Software Engineer for Big Data & Data Analytics SEAD-0617-VM - Smartercom Italia srl - Milano Nord, Lombardia   
Smartercom Italia, a company specializing in telecommunications and ICT consulting, is expanding its work groups at its clients in Vimercate (Monza Brianza),
From Indeed - Thu, 29 Jun 2017 11:32:09 GMT - View all jobs in Milano Nord, Lombardia
          Software Engineer Big Data & Data Analytics - Smartercom Italia srl - Milano, Lombardia   
Smartercom Italia, a company specializing in telecommunications and ICT consulting, is expanding its work groups at its clients in Vimercate (Monza Brianza),
From IProgrammatori.it - Thu, 29 Jun 2017 12:55:53 GMT - View all jobs in Milano, Lombardia
          Vincent Granville posted a blog post   

          Software Development Engineer - Huawei Canada - Markham, ON   
Huawei is seeking a Software Development Engineer to join our Big Data Analytics team at the Canada Research Centre in Markham....
From Huawei Canada - Thu, 15 Jun 2017 09:33:55 GMT - View all Markham, ON jobs
          (USA-WA-Seattle) Oracle Cloud, Information Delivery - Manager   
Deloitte is one of the leading professional services organizations in the United States, specializing in audit, tax, consulting and financial advisory services with clients in more than 20 industries. We provide powerful business solutions to some of the world's most well-known and respected companies, including more than 75 percent of the Fortune 100.

At Deloitte, you can have a rewarding career on every level. In addition to challenging and meaningful work, you'll have the chance to give back to your community, make a positive impact on the environment, participate in a range of diversity and inclusion initiatives, and find the support, coaching, and training it takes to advance your career. Our commitment to individual choice lets you customize aspects of your career path, your educational opportunities and your benefits. And our culture of innovation means your ideas on how to improve our business and your clients' will be heard.

Work you'll do
A Manager at Deloitte will manage and deliver components of client engagements that identify, design, and implement technology and creative business solutions for large companies. Key responsibilities will include:
• Manage teams in the identification of business requirements, functional design, process design (including scenario design, flow mapping), prototyping, testing, training, and defining support procedures.
• Formulate planning, budgeting, forecasting and reporting strategies.
• Manage full life cycle implementations.
• Develop statements of work and/or client proposals.
• Identify business opportunities to increase usability and profitability of information architecture.
• Experience with program leadership, governance and change enablement.
• Develop and manage vendor relationships.
• Lead workshops for client education.
• Manage resources and budget on client projects.
• Assist and drive the team by providing oversight.

The Team: Analytics and Information Management
At Deloitte Consulting LLP, our Analytics and Information Management team is responsible for transforming numbers and statistics into actionable information for our clients. We help them predict, plan, and adapt their business strategies to meet real-time challenges and confidently help navigate past whatever obstacles come their way. We take a holistic approach to data and offer a broad range of integration and analytics capabilities, including: Information Delivery, Data Management and Architecture, Digital Finance Technology and Advanced Analytics Enablement. Learn more about our Analytics and Information Management practice.

Qualifications
Required:
• 6 years of relevant technology architecture consulting or industry experience in Consulting and the Oracle products.
• Experience with Oracle Data Integration, ideally around on-prem and cloud-based data integration strategies, Big Data, Cloud and Business Intelligence products & applications.
• Experience managing cloud projects.
• Proficient in one or more Oracle products: Oracle Cloud Services (OCS), Oracle Business Intelligence technologies and applications (BICS, OTBI, Oracle R, Oracle BI Appliances), Oracle Data Integrator (ODI) or Oracle GoldenGate.
• Experience with traditional Data Warehousing implementations and OBIEE, OBIA is a plus; however, must have experience with the above referenced technologies and strategies for implementation and design.
• Provide support on data quality components during the implementation of the solution architecture.
• Provide technical recommendations for optimized data access and retention for the data warehouse.
• Provide oversight support to the design, development and QA teams.
• Define areas of improvement to optimize data flows.
• Identify strategy for data acquisition and archival.
• Bachelor's Degree or equivalent professional experience.
• Willingness for weekly client-based travel, up to 80-100% (Monday through Thursday/Friday).

Qualified candidates are also required to have at least one full lifecycle project experience in one or more of the following areas:
• Development of statements of work and/or client proposals.
• Identifying business opportunities to increase usability and profitability of information architecture.
• Program leadership, governance and change enablement.
• Developing and managing vendor relationships.
• Leading workshops for client education.
• Managing resources and budget on client projects.
• Ability to scope out the effort and cost for an enterprise reporting solution.
• Ability to define how an enterprise information system will align with the organization's business and strategic objectives.
• Ability to manage multiple teams on a data warehousing engagement.

Preferred:
• Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint).
• Strong problem solving and troubleshooting skills with the ability to exercise mature judgment.
• Eagerness to mentor junior staff.
• An advanced degree in the area of specialization.

How you'll grow
At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to help sharpen skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Center.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

Recruiter tips
We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you're applying to. Check out recruiting tips from Deloitte professionals.

About Deloitte
As used in this document, "Deloitte" means Deloitte LLP and its subsidiaries. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.

Disclaimer: If you are not reviewing this job posting on our Careers site (careers.deloitte.com) or one of our approved job boards, we cannot guarantee the validity of this posting. For a list of our current postings, please visit us at careers.deloitte.com.

Category: Management Consulting
          (USA-WA-Seattle) Oracle Cloud, Information Delivery - Senior Consultant   
Work you'll do
Senior Consultants work within an engagement team. Key responsibilities will include:
• Function as integrators between business needs and technology solutions, helping to create technology solutions to meet clients' business needs.
• Identifying business requirements, requirements management, functional design, prototyping, process design (including scenario design, flow mapping), testing, training, defining support procedures and supporting implementations.

Qualifications
Required:
• 3 years of relevant technology architecture consulting or industry experience in Consulting with the Oracle products.
• Experience with Oracle Data Integration, ideally around on-prem and cloud-based data integration strategies, Big Data, Cloud and Business Intelligence products & applications.
• Experience managing cloud projects.
• Proficient in one or more Oracle products: Oracle Analytics Cloud (OAC), Oracle Business Intelligence technologies and applications (BICS, OTBI, Oracle R, Oracle BI Appliances), Oracle Data Integrator (ODI) or Oracle GoldenGate.
• Experience providing support on data quality components during the implementation of the solution architecture.
• Bachelor's Degree or 4 years equivalent professional experience.
• Ability to travel 80-100% of the time (Monday through Thursday/Friday).

Preferred:
• Experience with traditional Data Warehousing implementations and OBIEE, OBIA is a plus; however, must have experience with the above referenced technologies and strategies for implementation and design.
• Experience with program leadership, governance and change enablement.
• Lead workshops for client education.
• Experience providing technical recommendations for optimized data access and retention for the data warehouse.
• Experience providing oversight support to the design, development and QA teams.
• Ability to scope out the effort and cost for an enterprise reporting solution.
• Ability to define how an enterprise information system will align with the organization's business and strategic objectives.
• Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint).
• Strong problem solving and troubleshooting skills with the ability to exercise mature judgment.
• Eagerness to mentor junior staff.
• An advanced degree in the area of specialization.
About Deloitte: As used in this document, Deloitte means Deloitte LLP and its subsidiaries. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. Disclaimer: If you are not reviewing this job posting on our Careers site (careers.deloitte.com) or one of our approved job boards, we cannot guarantee the validity of this posting. For a list of our current postings, please visit us at careers.deloitte.com. Category: Management Consulting
          (USA-WA-Seattle) Oracle Cloud, Information Delivery - Consultant   
Deloitte is one of the leading professional services organizations in the United States, specializing in audit, tax, consulting and financial advisory services with clients in more than 20 industries. We provide powerful business solutions to some of the world's most well-known and respected companies, including more than 75 percent of the Fortune 100. At Deloitte, you can have a rewarding career on every level. In addition to challenging and meaningful work, you'll have the chance to give back to your community, make a positive impact on the environment, participate in a range of diversity and inclusion initiatives, and find the support, coaching, and training it takes to advance your career. Our commitment to individual choice lets you customize aspects of your career path, your educational opportunities and your benefits. And our culture of innovation means your ideas on how to improve our business and your clients' will be heard.
Work you'll do: Consultants work within an engagement team. Key responsibilities will include: functioning as integrators between business needs and technology solutions, helping to create technology solutions to meet clients' business needs; and defining systems strategy, developing system requirements, designing, prototyping, and testing custom technology solutions, and supporting system implementation.
The Team: Analytics and Information Management. At Deloitte Consulting LLP, our Analytics and Information Management team is responsible for transforming numbers and statistics into actionable information for our clients. We help them predict, plan, and adapt their business strategies to meet real-time challenges and confidently help navigate past whatever obstacles come their way. We take a holistic approach to data and offer a broad range of integration and analytics capabilities, including: Information Delivery, Data Management and Architecture, Digital Finance and Technology, and Advanced Analytics Enablement. Learn more about our Analytics and Information Management practice.
Analytics Information Management Consultant candidates are required to have: 2 years of relevant technology architecture consulting or industry experience with Oracle products. Experience with Oracle Data Integration, ideally around on-premises and cloud-based data integration strategies, Big Data, Cloud and Business Intelligence products and applications. Proficiency in one or more Oracle products: Oracle Analytics Cloud (OAC), Oracle Business Intelligence technologies and applications (BICS, OTBI, Oracle R, Oracle BI Appliances, Oracle Data Integrator (ODI) or Oracle GoldenGate). Experience providing support on data quality components during the implementation of the solution architecture. Experience providing technical recommendations for optimized data access and retention for the data warehouse. Bachelor's Degree or 4 years equivalent professional experience. Ability to travel 80-100% of the time (Monday-Thursday/Friday).
Preferred: Experience with traditional Data Warehousing implementations and OBIEE; OBIA is a plus; however, candidates must have experience with the above-referenced technologies and strategies for implementation and design. Experience providing oversight support to the design, development and QA teams. Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint). Strong problem solving and troubleshooting skills with the ability to exercise mature judgment. Eagerness to mentor junior staff. An advanced degree in the area of specialization is preferred. 
How you'll grow: At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to help sharpen skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Center.
Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Deloitte's culture: Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.
Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Recruiter tips: We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
About Deloitte: As used in this document, Deloitte means Deloitte LLP and its subsidiaries. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. Disclaimer: If you are not reviewing this job posting on our Careers site (careers.deloitte.com) or one of our approved job boards, we cannot guarantee the validity of this posting. For a list of our current postings, please visit us at careers.deloitte.com. Category: Management Consulting
          (USA-WA-Seattle) Oracle Cloud, Information Delivery - Senior Manager   
Deloitte is one of the leading professional services organizations in the United States, specializing in audit, tax, consulting and financial advisory services with clients in more than 20 industries. We provide powerful business solutions to some of the world's most well-known and respected companies, including more than 75 percent of the Fortune 100. At Deloitte, you can have a rewarding career on every level. In addition to challenging and meaningful work, you'll have the chance to give back to your community, make a positive impact on the environment, participate in a range of diversity and inclusion initiatives, and find the support, coaching, and training it takes to advance your career. Our commitment to individual choice lets you customize aspects of your career path, your educational opportunities and your benefits. And our culture of innovation means your ideas on how to improve our business and your clients' will be heard.
Work you'll do: Senior Managers are expected to contribute to the firm's growth and development in a variety of ways, including:
Engagement Management: Lead engagement planning and budgeting; mobilize and manage engagement teams; define deliverable structure and content; facilitate buy-in of proposed solutions from top management levels at the client; direct on-time, quality delivery of work products; manage engagement economics; manage engagement risk.
Client Management: Manage day-to-day interactions with executive clients and sponsors.
Business Development: Develop and maintain contact with top decision makers at key clients; organize and lead pursuit teams; participate in and lead aspects of the proposal development process; contribute to the development of proposal pricing strategies.
Practice Development & Eminence: Develop practical solutions and methodologies; develop "thoughtware" and "point-of-view" documents; participate in public speaking events; get published in industry periodicals.
People Development: Perform the role of counselor and coach; provide input and guidance into the staffing process; actively participate in staff recruitment and retention activities; provide leadership and support for delivery teams and staff in local offices.
The Team: Analytics and Information Management. At Deloitte Consulting LLP, our Analytics and Information Management team is responsible for transforming numbers and statistics into actionable information for our clients. We help them predict, plan, and adapt their business strategies to meet real-time challenges and confidently help navigate past whatever obstacles come their way. We take a holistic approach to data and offer a broad range of integration and analytics capabilities, including: Information Delivery, Data Management and Architecture, Digital Finance and Technology, and Advanced Analytics Enablement. Learn more about our Analytics and Information Management practice.
Qualifications. Required: 8 years of relevant technology architecture consulting or industry experience. Experience with Oracle Data Integration, ideally around on-premises and cloud-based data integration strategies, Big Data, Cloud and Business Intelligence products and applications. 
Experience managing cloud projects. Proficiency in one or more Oracle products: Oracle Cloud Services (OCS), Oracle Business Intelligence technologies and applications (BICS, OTBI, Oracle R, Oracle BI Appliances, Oracle Data Integrator (ODI) or Oracle GoldenGate). Experience with traditional Data Warehousing implementations and OBIEE; OBIA is a plus; however, candidates must have experience with the above-referenced technologies and strategies for implementation and design. Ability to conceptualize, design and assist in the implementation of new and existing systems, middleware, data warehouse and production architectures. Identify and promote best practices and patterns for data modeling, and provide oversight for all activities related to data cleansing, data quality and data consolidation using standard data modeling methodologies and processes. Provide support on data quality components during the implementation of the solution architecture. Provide technical recommendations for optimized data access and retention for the data warehouse. Bachelor's Degree or equivalent professional experience. Willingness for weekly client-based travel, up to 80-100% (Monday-Thursday/Friday).
Qualified candidates are also required to have at least one full lifecycle project experience in one or more of the following areas: development of statements of work and/or client proposals; broad system-level expertise across multiple computing platforms and technologies, with the ability to influence direction around information management at the enterprise level; organizing knowledge transfer to clients; developing and managing vendor relationships; leading workshops for client education; identifying opportunities to add value to the client's organization by effective use of information management; building relationships with CXOs and key stakeholders responsible for information and performance management in the client's organization; ability to scope out the effort and cost for an enterprise reporting solution; ability to define how an enterprise information system will align with the organization's business and strategic objectives; ability to manage multiple teams on a data warehousing engagement.
Preferred: Ability to work independently and manage small engagements or parts of large engagements. Strong oral and written communication skills, including presentation skills (MS Visio, MS PowerPoint). Strong problem solving and troubleshooting skills with the ability to exercise mature judgment. Willingness to mentor junior staff. An advanced degree in the area of specialization is preferred.
How you'll grow: At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to help sharpen skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Center.
Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Deloitte's culture: Our positive and supportive culture encourages our people to do their best work every day. 
We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.
Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Recruiter tips: We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
About Deloitte: As used in this document, Deloitte means Deloitte LLP and its subsidiaries. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. Disclaimer: If you are not reviewing this job posting on our Careers site (careers.deloitte.com) or one of our approved job boards, we cannot guarantee the validity of this posting. For a list of our current postings, please visit us at careers.deloitte.com. Category: Management Consulting
          (USA-WA-Redmond) Principal Software Eng Lead   
Billions of users. Billions of dollars. Millions of products. Planet-wide reach. Sell everything: apps, games, hardware. It’s not Amazon. It’s Microsoft Marketplace Services. If you are excited by working on software at the scale of one of the largest online marketplaces in the world, then this is your dream job. The Universal Store team in WDG is developing some of Microsoft’s largest-scale and business-critical cloud services. These services have a huge global footprint of over 240 markets and process millions of transactions daily, with loads growing linearly as Microsoft moves to a “cloud first”, “mobile first” strategy. The platform powers all of Microsoft’s key services - Windows App Store, Windows Phone, XBOX, Bing Ads, Office 365, Azure, to name just a few. Whether renting a movie or buying a game on Xbox LIVE, purchasing an app on a Windows or Windows Phone device, signing up for an Office 365 subscription or paying for Azure services, you are using the Universal Store platform. We are the Marketplace Services team and are responsible for the services which power Microsoft’s marketplace. We specialize in a massive transactional infrastructure that handles the core commerce functionality of Xbox Live, Windows and Phone services. Every time a customer searches, buys, or gets a license for a product in the marketplace, they are using a Marketplace Service. If you have a gamertag, or if you’ve used your Xbox, Windows Store or Windows Phone to purchase a game, a song, a video, a subscription, or an app, then you’re one of our customers. We’re looking for great developers writing excellent software. If you have a history of designing, owning and shipping software, as well as excellent communication and collaboration skills, then we want to talk to you. You should have a solid understanding of the software development cycle, from architecture to testing. You’ll have a passion for quality and be a creative thinker. You’ll write secure, reliable, scalable, and maintainable code, and then effectively debug it, test it and support it live. You should be comfortable owning a feature and making decisions independently. We expect an Engineering Lead on the team to:
1. Be a great manager, mentor, and leader of the team and broader organization. You will manage a team of between 5 and 8 developers.
2. Be a great process engineer. You will be accountable for the design, implementation, schedule, and delivery of your team’s services. Doing this in an efficient way is a must.
3. Earn the technical respect of the people on your team. This is a technical position, and the ideal candidate should be capable of working in the code, supporting the service, and understanding at a detailed level how the software works.
4. Be the software architect: we will look to you to have an informed opinion about what the software should become and how it should evolve, and the ability to land your plan in the division. Experience building scalable cloud services, distributed systems, and/or database systems would be a definite plus.
Required Qualifications: • 10+ years demonstrated experience in designing and developing enterprise-level, internet-scale services/solutions.
Preferred Qualifications: • Strong design and programming skills in C# or Java. • Experience leading and influencing virtual teams of developers to deliver global, highly available, highly scalable services. • Understanding and experience in Machine Learning a plus. • Expert hands-on software development expertise, including object-oriented design skills, .NET, etc. 
• Excellent analytical skills • Excellent communication skills, including the ability to write concise and accurate technical documentation, communicate technical ideas to non-technical audiences, and lead development teams. • Demonstrated ability to impact/influence engineering and project teams • Proven track record, with industry-level influence, of shipping highly scalable and reliable services/systems • Ability to work independently and in a team setting, and to research innovative solutions for challenging business/technical problems • Solid technical aptitude and problem-solving skills; takes initiative; results-driven, with strong debugging skills • Expert understanding of Engineering Excellence processes and requirements • Expertise and knowledge in modern engineering practices (Continuous Integration, TDD, automated deployments with integrated quality gates) • Experience with cloud platforms like Windows Azure Platform, Amazon AWS, Google Apps • Experience with Big Data/analytics systems like Cosmos or Hadoop implementations • Experience in eCommerce and/or Supply Chain • Master's or Bachelor's degree in Computer Science, Computer Engineering, or equivalent, with related engineering experience, is required. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
          (USA-WA-Bellevue) Sr Data Engineer   
Expedia Media Solutions builds innovative media partnerships for travel advertisers, enabling them to use Expedia's network of leading travel brands and global sites. We have revolutionized the way brands reach and connect with online travel consumers, emerging as a leader in online advertising among travel and e-commerce brands. With a growing product portfolio offering a multitude of advertising and sponsorship opportunities, the Media Solutions team at Expedia has built a marketing platform for advertising partners to reach the 78 million worldwide monthly unique visitors like you who frequent Expedia Inc. sites. You will shape the future of the Business Intelligence team as we continue to build critical aggregate datasets on our on-premises platform and work towards creating granular datasets on our developing cloud infrastructure that will serve as the foundation for business analysts and data scientists to create meaningful insights. We are seeking a Sr Data Engineer to lead the transition of our nimble team of data warehousing experts from using traditional ETL techniques to more flexible languages and tools that were created to enable advanced analytics on large datasets. **Key Responsibilities:** * You will optimize and automate ingestion processes for a variety of data sources such as: click stream, audience, transactional, and advertising * You will drive data investigations across organizations and deliver resolution of technical, procedural, and/or operational issues to completion * You will build/extend toolsets, create/maintain batch jobs, and create systems documentation * You will guide and mentor other data engineers and influence large-scale projects **Minimum Qualifications:** * 7+ years of hands-on experience designing and operating large data platforms * 2+ years of experience with Java, Python or a similar programming language * 2+ years leading data warehousing and analytics projects, including using AWS technologies—Redshift, S3, EC2, and other big data technologies * Expert in SQL and experienced with common and emerging Hadoop-ecosystem tools (e.g. Hive, Spark, Presto, Qubole) * Experience in implementing SDLC best practices and Agile methods * BS in Computer Science, Mathematics, Statistics, or related field **Preferred Qualifications:** * Background in web analytics, online advertising, e-commerce, business measurement or a comparable reporting and analytics role Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization. * Posted Yesterday * Full time * R-19505
          (USA-WA) Mobile Solutions Architect   
**General Description** The Samsung R&D Lab in Bellevue, WA is looking for a talented Wireless Modem Architect Engineer to drive and create our next-generation wireless software and services across devices. Established in 2011 as a Samsung overseas R&D center, the lab owns advanced technology collaboration with wireless carriers in the broad communication, computing, and entertainment domains, covering technologies such as mobile wireless, big data, AR/VR, and AI. The lab is also responsible for Samsung solution and innovation ideation and development, carrier technical requirement analysis, architecture design and feature implementation of software services. We need an experienced system staff engineer who is passionate about telecom technology evolution and cellular wireless technologies, as well as creating cutting-edge innovations for Samsung’s next generation of products and services. A System Staff Engineer will lead the lab’s advanced technology trials and collaborations with US wireless carriers in the area of wireless communication and modem/protocol engineering. The role will be the go-to person for telecom network technology evolution, carrier requirement analysis and solution design, with solid hands-on experience and expertise in technologies such as LTE bands, Carrier Aggregation, MIMO, LTE-U, 5G, Enhanced Voice Services, and so on. The role is expected to understand emerging cellular wireless technologies; be very familiar with modem software architecture at the chipset level, protocol layer engineering, and network and device performance; and be able to create and assess architectural design and implementation options, as well as debug and resolve device issues across the PHY, MAC, RRC & RLC layers. The role is expected to provide technical guidance to junior engineers. * As a technical architect, keep up to date with the industry’s wireless technology trends with deep technical insight and hands-on practice; lead Samsung innovation ideation and proof-of-concept work for upcoming wireless technologies. * As a protocol SME, collaborate closely with wireless carriers on emerging new communication technologies, technology trials and commercialization execution. * Provide architectural technical guidance to team members to perform deep carrier requirement analysis and software design. Provide implementation and test planning guidance throughout carrier trials and commercialization by leveraging deep expertise in modem software, Android RIL and wireless standards. * Evaluate internal engineering processes and identify improvements for better software quality and shorter time to market. **Necessary Skills / Attributes** * Industry candidates are required to have at least 8 years of work experience in modem software development. * At minimum, a Bachelor's degree in Electronic Engineering or Computer Science is required. Industry candidates with 6-10 years of experience can also be considered. * Strong cellular systems knowledge in LTE/CA/VoLTE/IMS Core, UMTS & GSM, 3GPP. * Strong technical expertise and experience in wireless Layer 1/2 protocol design, simulation and optimization for wireless standards; strong technical expertise in wireless WLAN and 4G/5G technologies (including spatial channel models, OFDM, MIMO, interference control, random channel access and selection, CSMA-CA). * Ability to evaluate new technologies (LTE-U, LTE Advanced, 5G, WiFi, MIMO, CA, EVS, etc.) 
and their applicability to Samsung devices. * Deep understanding of higher-layer protocols such as MAC, RLC, PDCP, IP and TCP. * Strong debugging experience with LTE/CA/VoLTE/IMS Core, UMTS, HSPA and GSM communications systems. * Understanding of Physical/MAC layers and Call Processing concepts in one or more of the following air interface standards: LTE/LTE-A. * Solid understanding of, and experience mapping, carrier requirements against 3GPP/ETSI, GSMA & RFC standards. * Strong sense of project ownership required. Self-motivated and comfortable learning and solving complicated problems in new technical areas under pressure. * Proficient in real-time embedded C programming. **Company Information** SAMSUNG ELECTRONICS AMERICA: BIG THINGS HAPPEN HERE. The amazing products for which Samsung is known world-wide are the results of the amazing people who work here. Their talent, creativity, dedication, and commitment to innovation are what make us who we are. To continue to be a world leader in technology, we focus on attracting the best talent available and offer a corporate culture in which every individual can challenge themselves to discover how good they are, and how great they can become. Headquartered in Ridgefield Park, NJ, and with offices in Richardson, TX and Palo Alto, CA, Samsung Electronics America, Inc. (SEA) is a wholly-owned subsidiary of Samsung Electronics Co. Ltd. and a world leader in technology. We market a broad range of award-winning consumer electronics, smartphones, information systems, and home appliances. Samsung's philosophy is based on our strong determination for growth, perpetual innovation and responsibility to corporate citizenship. As a result of our commitment to innovation and unique design, the Samsung organization is one of the most decorated brands in the electronics industry. Our company is currently ranked #7 in Interbrand’s "100 Best Global Brands," and was named #3 on the Boston Consulting Group list of the world's most innovative companies in 2014. At Samsung we work hard – every day. It is a fast-paced and challenging work environment, and we are a nimble team that constantly pushes ourselves to be the best. If you have energy, passion, dedication and drive, and you thrive in a fast-paced workplace, the rewards at Samsung are many. Imagine working for a global company that is a world leader in innovation, in an environment where exciting things happen every day. Imagine working with an amazing group of visionaries/individuals who make products that bring joy to millions of people across the globe every single day. Imagine where you want to be, and who you want to be. At Samsung...the possibilities are limitless. Apply today and find out why LinkedIn ranked us as one of North America’s Most InDemand Employers in 2014. To this end, we follow various protocols during the recruitment process, including but not limited to, avoiding the inadvertent disclosure of confidential information of the applicant’s former employer. Samsung Electronics America provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, status as a protected veteran, genetic information, status as a qualified individual with a disability or any other characteristic protected by law. *Category:* System Engineering *Full-Time/Part-Time:* Regular Full-Time *Location:* Bellevue, Washington
          (USA-WA) Staff Engineer – Samsung Mobile R&D Lab   
**General Description** The Samsung R&D Lab in Bellevue, WA is looking for talented Mobile Software Engineers to support Samsung’s mobile product commercialization and advanced service development with US carriers. Established in June 2011 as a local lab of Samsung R&D, the lab owns advanced technology collaboration with wireless carriers in the broad communication, computing, and entertainment domains, covering technologies such as mobile wireless communication, big data, AR/VR, and AI. The lab is also responsible for carrier technical requirement analysis, architecture design and feature implementation of software services on device and in the cloud. We need software development engineers who are passionate about mobile technologies, software and services in the mobile and cloud space, as well as creating cutting-edge innovations for Samsung’s next generation of products and services, in the cloud and on devices. A Staff Software Engineer with the lab will play a technical lead role in many aspects of device engineering: requirement analysis, architecture design, implementation and commercialization of specific features on Android and Tizen devices, as well as debugging and resolving device issues across OS layers. The role is expected to provide technical guidance to junior engineers on software architecture, design patterns, engineering best practice, as well as task prioritization and professional communication internally and externally. Job Duties: * As a Staff Engineer with full-stack development experience, identify and propose innovations and new services, and perform deep requirement analysis and software architecture design. Develop both server-side and client-side architecture and functional specifications utilizing best design patterns and coding standards. Provide guidance to the team in designing, developing and test planning throughout the entire engineering process, with emphasis on design for usability, performance, scalability, testability and code coverage. * Mentor and manage junior engineers on complex issue analysis, architecture, design and development. * Perform root cause analysis of technical issues by leveraging deep expertise in broad mobile embedded system areas such as Android application performance, the Android Framework, mobile OS internals, the Linux kernel, system battery performance, system stability, and so on. * Evaluate internal engineering processes and identify improvements for better software quality and shorter time to market. **Necessary Skills / Attributes** * Industry candidates are required to have at least 6 years of post-bachelor experience in mobile embedded systems (Android preferred) and/or server/services development. * Design, develop, unit test and deploy Android-based solutions using common standards and frameworks. * Solid knowledge of the Android SDK; understands the fundamentals of what makes good app design and can show examples of this. * Understanding of core Java and/or C++ and OOD. * Excellent knowledge of the fundamentals of computer science – operating systems, data structures, algorithms, and TCP/IP networking concepts – is mandatory. * Excellent written and verbal communication skills. * BS/MS/Ph.D. degree in Computer Science or a related technical field, or equivalent practical experience. Preferred Qualifications: * Candidates with demonstrable expertise in Android internals or with development experience at phone OEMs will be given preference. * Strong sense of project ownership required. 
Self-motivated and comfortable learning and solving complicated problems in new technical areas under pressure. **Company Information** SAMSUNG ELECTRONICS AMERICA: BIG THINGS HAPPEN HERE. The amazing products for which Samsung is known world-wide are the results of the amazing people who work here. Their talent, creativity, dedication, and commitment to innovation are what make us who we are. To continue to be a world leader in technology, we focus on attracting the best talent available and offer a corporate culture in which every individual can challenge themselves to discover how good they are, and how great they can become. Headquartered in Ridgefield Park, NJ, and with offices in Richardson, TX and Palo Alto, CA, Samsung Electronics America, Inc. (SEA) is a wholly-owned subsidiary of Samsung Electronics Co. Ltd. and a world leader in technology. We market a broad range of award-winning consumer electronics, smartphones, information systems, and home appliances. Samsung's philosophy is based on our strong determination for growth, perpetual innovation and responsibility to corporate citizenship. As a result of our commitment to innovation and unique design, the Samsung organization is one of the most decorated brands in the electronics industry. Our company is currently ranked #7 in Interbrand’s "100 Best Global Brands," and was named #3 on the Boston Consulting Group list of the world's most innovative companies in 2014. At Samsung we work hard – every day. It is a fast-paced and challenging work environment, and we are a nimble team that constantly pushes ourselves to be the best. If you have energy, passion, dedication and drive, and you thrive in a fast-paced workplace, the rewards at Samsung are many. Imagine working for a global company that is a world leader in innovation, in an environment where exciting things happen every day. Imagine working with an amazing group of visionaries/individuals who make products that bring joy to millions of people across the globe every single day. Imagine where you want to be, and who you want to be. At Samsung...the possibilities are limitless. Apply today and find out why LinkedIn ranked us as one of North America’s Most InDemand Employers in 2014. To this end, we follow various protocols during the recruitment process, including but not limited to, avoiding the inadvertent disclosure of confidential information of the applicant’s former employer. Samsung Electronics America provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, status as a protected veteran, genetic information, status as a qualified individual with a disability or any other characteristic protected by law. *Category:* S/W Engineering *Full-Time/Part-Time:* Regular Full-Time *Location:* Bellevue, Washington
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
All qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Big Data Experience: • 4+ years of administration experience on Big Data Hadoop Ecosystem • In-depth knowledge of capacity planning, performance management
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Fabio Pereira edited Behavioral Diginomics   
book by Fabio Pereira
Options for subtitles:
Bridging the gap between Behavioral Economics and Digital
What Mark Zuckerberg can learn from Dan Ariely
The brain science behind our digital world
The forces behind how we behave and make decisions in a digital world
Applied learnings from Nudge, Sway, Click and Predictably Irrational to the Digital Revolution
Questions that the book will answer:
How three simple design principles from behavioral economics helped people eat healthier food
How a simple payment web address design led 45% more individuals to pay online
Why the president of the United States ordered executive offices and agencies to use behavioral science
What’s different in the digital world, for good and bad, about human behavior, and what we can do about it
What does brain science say about stereotypes, and how do they impact us when we’re scrolling our Facebook timeline
The commercial digital world around us wants our money, time, and attention; what can we do about this
How can I be more impactful and influential on digital platforms
How scientists observed 257 thousand people and what they learned about their behavior
How can behavioral science help people make self-beneficial choices and understand the implications of their decisions
How can designers use behavioral science discoveries to create digital interfaces
Chapter 1: The human side of digital
In this chapter:
Current state and growth predictions for the digital revolution
Cutting-edge research from the fields of social psychology and behavioral economics that influences the digital world
Introducing the importance of bridging the gap between Behavioral Economics and Digital, with real-world examples
A sneak peek:
The world has 7-8 billion people, and 3 billion of them are internet users. There are 4.5 billion likes on Facebook, and 5 million smartphones are sold on average every day. I speak to my mom on WhatsApp; we're 15 thousand km apart, she lives in Brazil and I live in Australia. My most recent team of 17 people was distributed across 3 countries and 4 different timezones.
We use digital devices to make decisions every day: what route to take when driving, which restaurant to go to, what to do on the weekend. We have these new tools and devices, yet so much of how we're influenced by them comes down to our prehistoric wiring; we're all humans, after all. Drawing on cutting-edge research from the fields of social psychology, behavioral economics, and organizational behavior, Behavioral Diginomics reveals forces that influence everyone’s behavior...
Chapter 2: Digital nudges that help
In this chapter:
What nudges can be applied to digital channels (email, phone notifications, etc.) to help people make better decisions?
Digital nudges in real-world examples and scientific experiments, and what we can learn from them:
A pop-up box which got employees to save paper by printing double-sided rather than single-sided
A shortened payment web address which led 45% more individuals to pay online
A digital signature box at the top of a form which increased data accuracy
Simple design techniques which promoted healthy workplace snack choices
Pre-populated insurance quotes which allowed users to better choose what was appropriate to their current situation
Chapter 3: The power and responsibilities of defaults
In this chapter:
How defaults can be one of the most powerful nudges, and some behavioral science stories around them; for example, only 4.25% of Danes are organ donors, whereas over 99% of Austrians are, due to the difference between opt-in and opt-out (a minimal sketch of this asymmetry follows at the end of this chapter outline).
Real-world examples of how websites and other digital channels have used defaults; for example, Netflix's opt-out special-offer checkbox, which says "Please do not email me Netflix special offers."
Analysis of what different types of digital defaults can and have been used.
Pre-selected checkboxes
Opt-out checkboxes, which require an action to not participate
Sorted dropdown values, with the "most selected" on top
The order of search results
Responsibilities of defaults:
The importance of having protection law against certain types of defaults, for instance automatically selling product B when you are buying product A.
Data privacy and what is behind a simple "Accept Terms and Conditions" click.
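To make the opt-in/opt-out asymmetry mentioned above concrete, here is a minimal, hypothetical Python sketch. The 90% keep-the-default share is an assumption for illustration only, not the Danish or Austrian figure.

```python
# Minimal sketch: same population, same preferences, different default.
# KEEP_DEFAULT is an assumed share of users who never change a pre-set
# value; it is illustrative, not real organ-donation data.
KEEP_DEFAULT = 0.90

def enrolled_share(default_is_enrolled: bool) -> float:
    """Expected share enrolled when most users simply keep the default."""
    return KEEP_DEFAULT if default_is_enrolled else 1.0 - KEEP_DEFAULT

print(f"opt-in  (default = not enrolled): {enrolled_share(False):.0%}")
print(f"opt-out (default = enrolled):     {enrolled_share(True):.0%}")
```

Under these assumptions, the identical population ends up 10% enrolled under opt-in and 90% enrolled under opt-out: the default, not the preference, does most of the work.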
Chapter 4: User research is more than just surveys
In this chapter:
How behavioral economists found out that only asking how people would behave was not enough
Stories from the digital world where only asking questions was not enough:
The 184 reports that people said were needed, when only 6 fulfilled 95% of all the use cases
In 2010 the media and users said they would "share", but now the "sharing economy" is dead.
Having users sign up for a digital service does not mean they will be active. For example, while running through its seed funding, a company had 42,000 people signed up, though fewer than 10,000 were active; signing up does not necessarily mean being an active user.
What else beyond surveys can be used to understand human behavior?
User research - watch users in action
Digital controlled experiments - run A/B tests with users and learn from them (a minimal sketch follows this list)
"Go to the library" - use existing knowledge as a base building block and learn on top of it
A sneak peek:
“If I’d asked my customers what they wanted, they’d have asked for faster horses.” Henry Ford knew that asking a question was not enough. Every behavioral economist in the world also knows that; not because they guess, but because they set up experiments: they ask people how they believe they would behave in certain situations and collect the answers. However, when they expose those same individuals to those same situations, the observed behavior differs dramatically from what people had predicted for themselves.
Digital surveys are a powerful tool for understanding human behavior. Almost every day I receive a few of them in my email inbox. I'm a consultant, so part of my job is to visit different clients and help them solve their organisational problems. One of my clients had a system which generated 184 reports. The system had been built on 15-year-old technology, it was hard to maintain, and few employees who knew how it worked were still around. The client decided to migrate the system to a new technology. There is usually a huge cost involved in rewriting such systems, so prioritising the most important features saves clients a great deal of money. My client's challenge at that time was to work out which of the 184 reports were the most used. So they decided to ask people, through a survey. The results showed that almost all of the reports were "needed": people were afraid of losing them, so they said "Yes, I need this report, don't take it away from me" (not in those exact words, though). Someone who fully believed that people can predict their own behavior, and that they report the facts exactly, would have spent millions of dollars migrating all 184 reports. Instead, this client decided to observe actual report usage. The result, not to my surprise, was that out of 184 reports, only 6 solved 95% of all the users' needs. Only 6. Through observation, this client saved millions of dollars, because they knew exactly which features of their system mattered most and exactly where to spend their money. Through observation, not only by asking questions.
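A minimal Python sketch of that observation step: tally actual report usage and keep the smallest set of reports that covers 95% of it. The usage counts below are invented for illustration; in a real engagement they would come from the system's audit logs.

```python
# Count observed report usage and find the smallest set of reports
# covering 95% of it. The counts are hypothetical placeholder data.
from collections import Counter

usage = Counter({"report_12": 60, "report_07": 35, "report_33": 3, "report_91": 2})
total = sum(usage.values())

covered, keep = 0, []
for report, n in usage.most_common():    # most-used reports first
    keep.append(report)
    covered += n
    if covered / total >= 0.95:          # stop once 95% of usage is covered
        break

print(f"{len(keep)} of {len(usage)} reports cover {covered / total:.0%} of usage: {keep}")
```

With this toy data, two of the four reports already cover 95% of observed usage, which is exactly the kind of concentration the 184-reports story describes.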
Chapter 5: Humans who sell, buy and become loyal
In this chapter:
Digital Retail: Which discoveries from behavioral economics can be applied to digital retail.
You touch it, you own it - a recent study shows that consumers who touch a product on a touchscreen device are more likely to buy it than if they are using a desktop computer with a mouse. Both physical touch and touchscreen touch trigger the endowment effect. Another experiment shows that washing your hands decreases endowment.
Endowment and customer loyalty - stories of companies which innovate using social science discoveries. For example, Loyal3, dubbed the Uber of Wall Street, uses the endowment effect to increase customer loyalty by giving customers and employees shares, which transforms them into passionate fans, a much more meaningful and valuable relationship to have with customers.
Chapter 6: Scrolling, Fast and Slow
In this chapter:
Introduction to dual process theory, our 2 brains - fast and slow, automatic and reflective
Attentional bias and the amount of information available to us online. Attention is the digital currency.
Attention Capital
how paying attention to something is getting harder the more information we have
how digital companies fight for user attention, and how there is now a price put on someone's attention, for example ads shown before YouTube videos
63% of people say that Facebook is a source of news about events and issues outside the realm of friends and family. At the same time, there are 1,500 posts eligible to appear in a Facebook user’s feed each day. This forces users to make fast decisions using their automatic brain, which is not accurate and may result in misconceptions and stereotypes about the information being consumed (a minimal sketch of this volume problem follows the questions below).
"Should I like or should I hate this?"
"Should I comment on this post?"
"Should I connect with this person?"
Chapter 7: The Social Science of Agile Teams
In this chapter:
"The Best-Kept Management Secret On The Planet: Agile" (Forbes)
What is Agile?
Why Agile works and what social science is behind its practices
Team goals, deadlines and student syndrome
Motivation and communication
Ownership and endowment effect
Distributed teams. What are the human implications for teams that don't work face to face
Chapter objective:
People's behavior is one of the centerpieces of Agile. Cognitive psychology and behavioral economics have helped us better understand some seemingly idiosyncratic human behaviors, for example the decoy effect in decision-making. Agile teams are constantly making decisions, for instance while prioritising, estimating stories or choosing the size of an iteration. This chapter will compare behavioral economics controlled experiments to Agile practice. The anticipated result is an increased awareness of the reasoning behind some Agile values, principles and practices, which in turn should improve the way we apply them as Agile adopters and practitioners.
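As one concrete reference point for the decoy effect mentioned above, the well-known Economist-subscription experiment from Predictably Irrational can be written down as data. The shares below are the commonly cited approximate figures, not output of any code here, and should be treated as illustrative.

```python
# The classic decoy-effect result written as data (shares are the commonly
# cited approximate figures from Predictably Irrational).
with_decoy = {"web only $59": 16, "print only $125 (decoy)": 0, "print+web $125": 84}
without_decoy = {"web only $59": 68, "print+web $125": 32}

def favorite(shares: dict) -> str:
    """Option chosen by the largest share of respondents."""
    return max(shares, key=shares.get)

# Removing a dominated option that nobody picks still flips the majority choice.
print("with decoy    ->", favorite(with_decoy))
print("without decoy ->", favorite(without_decoy))
```

The same reversal is plausible whenever an Agile team compares three options, for example three candidate iteration lengths where one is clearly dominated, which is why the chapter treats estimation and prioritisation as decision-making experiments.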
Chapter 8: Big Data, Big Experiments, Big Learnings
In this chapter:
Initial overview of how usually social experiments in labs work:
usually tens or hundreds of participants are involved
process: recruitment, preparation, experimentation and results analysis.
The story of a study which used big data to observe 257,000 passengers on 1,966 flights, and how all that data allowed a researcher to simulate thousands of studies.
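A minimal sketch of why a dataset that large is so valuable: it can be resampled into thousands of lab-sized "studies" without recruiting anyone new. All numbers and field meanings below are invented for illustration; only the 257,000 population size echoes the study above.

```python
# Resample one large observational dataset into many lab-sized "studies".
# The data is synthetic; the 12% base rate is an arbitrary assumption.
import random

random.seed(42)
# Pretend this is the observed dataset: one record per passenger,
# 1 = exhibited the behavior of interest, 0 = did not.
population = [1 if random.random() < 0.12 else 0 for _ in range(257_000)]

def simulated_study(sample_size: int = 200) -> float:
    """Draw one lab-sized 'study' from the observed population."""
    sample = random.sample(population, sample_size)
    return sum(sample) / sample_size

estimates = [simulated_study() for _ in range(1_000)]
mean = sum(estimates) / len(estimates)
spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(f"1,000 simulated studies: mean rate {mean:.3f}, std dev {spread:.3f}")
```

Each draw plays the role of one traditional experiment with a couple of hundred participants; running a thousand of them takes seconds and shows directly how much study-to-study variation recruitment-sized samples produce.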
Chapter 9: To Nudge or not to Nudge
In this chapter:
The final chapter will be an analysis of the legal and moral questions behind using this knowledge, observing the world from different perspectives:
Digital companies who want to nudge customers
Governments, which might nudge to better serve their people
Users who should be aware of the nudges and biases to make better decisions
          Big Data Infrastructure Industry Worldwide Growth and Development   
(EMAILWIRE.COM, July 01, 2017) In this report, we analyze the Big Data Infrastructure industry from two aspects: one part is about its production and the other is about its consumption. In terms of production, we analyze the production, revenue and gross margin of its main manufacturers and...
          (USA-AR-Little Rock) Regional Sales Manager   
**About Us:** GE is the world's Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. Through our people, leadership development, services, technology and scale, GE delivers better outcomes for global customers by speaking the language of industry. GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer at http://www.ge.com/sites/default/files/15-000845%20EEO%20combined.pdf . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. **Role Summary:** Job Overview: In this strategic sales position, the Regional Sales Manager – Digital Sales Direct will be responsible for meeting and exceeding an orders operating plan in the west region across a range of power utility and industry customers, including identifying sales opportunities within existing and new accounts, prospecting, strategy planning, executive relationship development, and deal closures. **Essential Responsibilities:** In this role, you will: + Develop a solution selling framework for the GE business — tailor the deal lifecycle, enablement deliverables, required resources, technology support, maturity model, sales methodology, etc. + Establish a deep understanding of customers’ business needs by creating value for customers across our solution footprint. + Add value to the customer’s business and maintain a goal-oriented approach to the business partnership. + Demonstrate to customers how they benefit by partnering with GE and how our solutions deliver results. + Aggressively develop and drive a sustainable commercial and solution strategy across multiple customer divisions that is aligned to the agreed account goals. + Develop and execute an Account Playbook that formalizes the “go high / go low” strategy for the Enterprise account. Where applicable, develop a joint governance process with executive sponsorship that aligns along the following pillars: Commercial, Product/Technology, Implementation and Support. + Leverage the “Big GE” by coordinating across multiple GE divisions to solve customer challenges and enhance value and loyalty through the introduction of GE Corporate programs (e.g. Industrial Internet, Minds + Machines). + Analyze the sales pipeline and maintain an array of opportunities to ensure that sales goals are achieved. + Actively grow and maintain a multi-year account plan that will be shared globally with parts of our business including Marketing, Product Management, Sales, Professional Services, and the Development teams to ensure coordination across the business. + Ensure a professional sales experience for customers during all aspects of the sales process and touch points, including formal meeting agendas, formal follow-up stating the sequence of events and next steps in writing, and issue resolution in a timely fashion. + Formulate winning proposals based on a cohesive strategy that leverages deep knowledge of industry, customer and GE product. **Qualifications/Requirements:** Basic Qualifications: + Bachelor’s Degree in business, marketing or related discipline + 12+ years of software industry experience minimum, with a proven track record. Eligibility Requirements: + Legal authorization to work in the U.S. is required. 
We will not sponsor individuals for employment visas, now or in the future, for this job. + Any offer of employment is conditioned upon the successful completion of a background investigation and drug screen. + Must be able to travel 50-70% of the time. **Desired Characteristics:** + Works with individuals across the GE businesses on how to use Big Data to collect and analyze market information, as well as how to present analysis and recommendations to drive strategic commercial decisions. + Proactive in seeking out new digital platforms to drive deeper connections with customers – such as heat-mapping existing relationships on LinkedIn to identify new sales opportunities – active in industry groups/blogs to gain exposure to target audiences, and viewed as a domain expert. + Develops acceptable strategies to mitigate risks triggered in RFPs and/or customers' T&Cs while meeting GE business objectives. + Leads the implementation of economic value selling throughout the customer organization. + Thoroughly analyzes data to identify trends and issues that translate into a plan for the customer, with some connection to seemingly independent problems. + Develops acceptable mitigation strategies that consider the T&Cs of customers, competition and partners and key differentiators while also meeting business objectives. + Identifies and prioritizes critical GE resources needed to further the sales effort, negotiating with stakeholders for utilization. Technical Expertise: + Establishes trust and empathy as an advisor to the client; works collaboratively in pursuit of discovery to define a desired business outcome while also uncovering unknown business outcomes the client has not previously considered; ensures that a plan is laid out to accomplish all outcomes. + Proactively identifies pipeline risks and develops mitigation plans; proactively shares 'best practices' to improve pipeline efficiency; helps to develop sales team relationships with key contacts. + Able to take products, services and solutions knowledge and connect it to customers’ objectives to develop differentiated opportunities for GE; draws upon non-traditional solutions; constantly thinks out of the box and outside the domain of expertise to develop creative solutions that meet ongoing customer needs. Business Acumen: + Leads the implementation of economic value selling throughout the customer organization; offers assistance and input to others across GE on this topic. + Communicates vertical expertise or future trends within the power utility and adjacent industries that drive certain benefits/challenges; is seen as part of the customer’s team rather than an outsider; uses this dialogue to narrow and refine the customer’s objectives or “top-of-mind” thoughts in order to start joint brainstorming of potential solutions or to identify industry benchmarks. + Viewed as a thought leader by strategic customers, invited to advise customers based on GE solution knowledge and industry expertise; brokers introductions and relationship handoffs with the customer C-Suite to other GE team members. + Able to use a variety of financial data in building a broad perspective of company and customer business within their respective industry and markets. + Understands the financial implication of different value drivers and creatively applies that understanding to impact company/customer metrics, i.e. revenue, profitability, market share, 
etc. Leadership: + Establishes & communicates team members' roles in relation to their function and data; shares knowledge, power and credit, establishing trust, credibility, and goodwill; coordinates role responsibilities with those of others to achieve mutual goals; encourages groups to work together to efficiently resolve problems. + Able to consistently lead the process to develop winnable strategies; creatively uses resources to anticipate and solve problems, resulting in innovative solutions and in customer and GE satisfaction, and finds alternatives beyond the obvious; keeps a broad perspective on the customer relationship and potential opportunities to increase customer loyalty. #DTR **Locations:** United States; California, Oregon, Washington; Redmond, San Ramon, West Coast Cities. GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer at http://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf . Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. GE will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditional upon the successful completion of a background investigation and drug screen.
          Nine value paths for Computer Gross   

Nine months after the launch of "Mission Value", the roadshow at which Computer Gross identified nine areas in which to keep attention trained on value, the Empoli-based VAD has a clear growth path in mind for its resellers, who must move from products to solutions, and from solutions to processes.

Computer Gross has already made that move itself, not just by creating a series of business units focused on cyber security, datacenter, analytics, hyperconvergence, Big Data, mobility, digital transformation, cognitive and cloud automation, but also by training dedicated teams of professionals able to work alongside resellers in developing innovative projects for their customers.

A dress rehearsal had already taken place exactly one year earlier, when the Tuscan distributor announced a business unit dedicated to video surveillance which, after the Hikvision, Honeywell Security, Samsung Techwin and Bosch brands, within a few weeks also added to its catalogue the networked video-monitoring systems and video-management software of Avigilon and the surveillance features of Milestone Systems' open XProtect platform.

The opening of the Arcipelago marketplace should be read in the same light. This business unit aggregates the cloud solutions of the main vendors in the Computer Gross portfolio, and since the end of last year the VAD has offered them as pay-per-use services that resellers can purchase for their customers, receiving at the end of the month a single invoice itemizing every purchase by customer and quantity.

All of this is complemented and supported by the services of the distributor's value divisions. In the first months of this year the company also took an equity stake in Attiva, the leading Italian distributor for the Apple world, with the aim of creating operational and commercial synergies that exploit the two companies' complementarity.

Not in the consumer segment, however, but in the enterprise space, thanks to the partnerships signed at the international level with Ibm and Sap, two of the undisputed flagship vendors in the Computer Gross portfolio.

Between insight, Big Data and analytics

Among the value agreements closed this June are one for Tableau's visual analytics, a tool that works across heterogeneous data sources to bring the concepts of modern business intelligence to companies of any size, and one for Samsung's KNOX Mobile Security solutions, which give mobile devices a separate, secure environment for corporate email and apps, isolated from the personal applications used every day.

The deal with the Korean manufacturer is a further strategic building block, given that the Tuscan VAD is currently the only one in Italy to have developed, thanks precisely to the Samsung partnership, the procedure for activating the Knox Mobile Enrollment service, which allows thousands of devices to be deployed quickly and simply.

The article "Nove percorsi di valore per Computer Gross" is original content from 01net.


          Barilla rolls out plant-floor social collaboration   

Alessandra Ardizzoia, head of digital engagement at Barilla, explained to the audience at Google Next in Milan (1,500 people) how she introduced social collaboration not so much in the offices as on the production line.

Google Cloud's offering in Italy is the responsibility of Nicola Buonanno.
Made up of the cloud platform, the productivity suite and the Android world, it draws on 13,000 partners. It rests on a network of data centers: more than one hundred points of presence, fibre, hundreds of Google Global Cache edge nodes, and nine active regions, with eight more to open (in Europe: London, Frankfurt and Finland).
The infrastructure available to customers is the same one Google itself uses, from security with encryption to Data Loss Prevention offered as a service. Among the most recent releases, in May, is Cloud Spanner, a relational SQL database that scales like a NoSQL system up to thousands of nodes and performs automatic synchronous replication across regions; it powers AdWords and is well suited to ticketing workloads. BigQuery creates an enterprise data warehouse in the cloud, with real-time analytics.
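
For readers unfamiliar with the product, here is a minimal sketch of what querying BigQuery looks like from Python, using the google-cloud-bigquery client library; the project, dataset and table names are hypothetical placeholders, and this is illustrative rather than anything shown at the event.

```python
# Minimal BigQuery sketch (assumes `pip install google-cloud-bigquery`
# and application-default credentials; the project/dataset/table names
# are hypothetical, not from the article).
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Standard SQL runs directly against the managed warehouse; there is no
# cluster to provision, which is the "data warehouse in the cloud" point.
query = """
    SELECT plant, COUNT(*) AS events
    FROM `my-project.factory.line_events`
    GROUP BY plant
    ORDER BY events DESC
"""

for row in client.query(query).result():
    print(row.plant, row.events)
```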

To put her work in context, Ardizzoia began by noting that at Barilla digital is not a cliché but strategic substance, organized along six threads (she calls them "streams"): digital initiatives for the consumer, shopper and customer world; digital transformation in the broad sense; IT Governance 2.0, with attention to building a business architecture; innovation mechanisms; the digitalization of people; and the new enterprise intelligence, the kind built on big data.

Digital lessons

It is a journey that has been under way for years and has already produced results. Ardizzoia spoke of three lessons Barilla has learned. First: digital transformation is not a trivial streamlining of core processes, but a transformation of the business model that shapes the lifestyle the company proposes.
Second: to succeed, digital must not be treated as an extra; it must be allowed to change the organization and ground it in data. Finally: it takes the active involvement of people.

Plant-floor social collaboration

It is in this context that Ardizzoia places an emblematic case, CollaborAction, a social collaboration process in the plants built on Google technology.
It all started with a need voiced by the then director of the Cremona plant, Cinzia Bassi (now at the Castiglion delle Stiviere plant), concerning shift handovers on the production lines.

The long-standing paper-based system was showing its age. A grassroots initiative tried a communication channel between operators based on WhatsApp chats, but such an initiative obviously could not be institutionalized without the involvement of company management.

So Barilla developed a social collaboration app and loaded it onto the tablets and smartphones of the operators on the production line.

Line operators use it as a chat tool to communicate. They shoot videos and photos, document events, and make them available to everyone.

Everyone sees everything that happens, which makes it easier to pick up tasks at the start of a shift.

Initially used on two production lines, the app was scaled up to the whole plant and is now being rolled out to six plants in Italy. In 2018 it will cover 18 plants, in Italy and abroad, involving a total of 1,800 people.

In this way Barilla capitalizes on company knowledge, using a conversational language.

The article "Barilla attua la social collaboration di stabilimento" is original content from 01net.


          (USA-AR-Little Rock) Team Leader Customer Forecasting & Replenishment-Walmart/ Sams Club   
Team Leader Customer Forecasting & Replenishment-Walmart/ Sams Club + Requisition ID:WD122329 + Position:Full time + Open date:Jun 28, 2017 5:14 PM + Functional area:Supply Chain & Logistics + Location: Bentonville, Arkansas + Required degrees:Bachelors + Experience required:5 years + Relocation:No Email a friend Basic qualifications: • Bachelor's Degree in Business Management or Supply Chain; a minimum of 5 years of Customer Supply Chain or Logistics leadership experience for a mass merchant/FMCG retailer. • Knowledge of supply chain processes that include order management, outbound warehouse activities, freight movement and customer receiving practices. • Strong knowledge base in replenishment, forecasting and inventory management concepts; direct experience preferred. • Excellent verbal and written communication skills • Strong analytical skills (trend analysis, cost benefit analysis) • Knowledge of budget and expense control and general accounting principles. • Proficient use of Order Processing and Replenishment systems and reporting. Preferred qualifications: • Master's degree in Business Management or Supply Chain Details: The Team Leader Customer Supply for Walmart is based at Walmart headquarters in Bentonville, AR. As Team Leader, you will be directly responsible for building relationships and partnering with key customer associates at all levels to effectively manage CPFR (Collaboration, Planning, Forecasting & Replenishment), inventory management and logistics activities. You will negotiate and influence GSK's and Walmart's blended strategic requirements to meet both companies' goals. Key Responsibilities: • Understand the current customer supply chain strategy and business model through collaborative integration and analysis of Big Data (down to store-level segmentation) to determine opportunities and solutions. • Lead the team in implementing agreed-upon strategies that ensure consistent execution of supply chain, logistics, forecasting, replenishment, and inventory strategies across the total customer supply chain. • Champion the GSK supply chain to deliver customer scorecard and critical GSK KPI results via agreements gained at the annual top-to-top meeting between Supply and Commercial leaders from both the customer and GSK. • Drive further improvements and efficiencies within GSK's Supply Chain by leading or participating in initiatives using Accelerated Delivery and Performance (ADP). • Own strategy development and communication of the Inventory Allocation Process when weeks of supply indicate service issues to the customer. • Coordinate and communicate CPFR / Customer Service shipment projections to Demand Planning and Finance leaders for the purpose of delivering GSK's financial goals on a quarterly and year-end basis. • Communicate all customer supply chain initiatives to GSK leaders in Supply Chain Logistics and Sales. • Support and facilitate personnel development to meet career aspirations, including coaching, PDP discussions, and talent reviews. Determine salary and bonus levels through the review process. Hire and promote highly talented associates and terminate as needed. • Manage New Product Development launch activity for Customer Service. • Convey customer inventory strategy and levels to Demand Planning as input to the company forecasting process. 
Primary leadership competencies must include: • Champion Change • Innovative • Manage Execution • Drive Results • Focus on Customer Needs • Use Sound Judgment • Foster Enthusiasm and Teamwork • Negotiation skills *LI-GSK Contact information: You may apply for this position online by selecting the Apply now button. If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK HR Service Centre at 1-877-694-7547 (US Toll Free) or +1 801 567 5155 (outside US). GSK is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class. Important notice to employment businesses/agencies: GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. Obtaining prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. GSK shall therefore not be liable for any fees arising from such actions or from any referrals by employment businesses/agencies in respect of the vacancies posted on this site. Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance with all federal and state US transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
          (USA-AR-Little Rock) Manager/Sr Manager – Partner Reporting & Analytics   
Overview of the Department: Consumer Products and Services is one of the largest growth engines of American Express, and the Co-Brand Partnerships team leads card partnerships as a source of top line growth, scale and strength for the Blue Box. The Portfolio Insights team provides various business solutions for the Co-Brand Partnership team including: reporting & analytics, competitive intelligence, investment management, goal setting, complaints tracking and servicing. This integral role will be a part of the reporting and analysis functions for the Co-Brand Partnership organization. The ideal candidate will have a unique blend of technical skills, curiosity for insights, ability to synthesize large amounts of data and marketing fluency. This role will provide the opportunity to work directly with senior leaders and cross-functional partners, develop a broad perspective of Co-Brand Partnerships, and leverage data in order to drive business growth. The role is a great opportunity for someone with marketing analytics experience looking to apply their skills in a fast-paced environment on high-impact initiatives. Job Responsibilities: * Consult with the team to understand data requirements and create ad-hoc, custom reports that support product-specific initiatives across the Co-Brand Partnerships team (with a focus on the Hilton portfolio) * Develop an advanced understanding of CPS data sources and potential usage scenarios * Translate complex, analytical output into easily digested insights across portfolios and identify areas of opportunity * Provide robust analytics to drive the strategy behind transformational initiatives * Incorporate/leverage new data sources to expand data/analytics * Leverage complex data sources to extract meaningful data and perform required testing and quality control of data before arriving at the final data set for the team * Partner closely with multiple organizations including Marketing, Finance and RIM * Manage projects with autonomy. Employment eligibility to work with American Express in the U.S. is required, as the company will not pursue visa sponsorship. * 4 - 10 years in deep analytics and presenting research/insights/findings to stakeholders * Strong quantitative skills and experience working with large amounts of data via SAS/SQL/Teradata programming and/or Big Data tools (i.e. IDN, Cornerstone, CSBS Datamart) * Programming background, software development, coding proficiency * Strong quantitative skills and proven track record delivering business insights through analytics * Strategic thinking and problem-solving skills * Experience in marketing analytics * Ability to operate independently and make critical decisions regarding high-visibility deliverables with tight timelines * High-quality written and oral communication skills with proven success interpreting business needs and driving results * Internal drive to challenge and improve upon the status quo * Demonstrated ability to handle multiple requests and prioritize accordingly * Excellent relationship building skills – enjoys working as part of a larger team and fostering relationships. 
* Strong communication skills – equally capable of dealing with a partner, peer, direct report, or senior leader * Advanced degree in statistics, mathematics, operations research, economics, engineering or equivalent Educational requirement: * Degree in Statistics, Engineering or Advanced Analytics preferred **Job:** *Marketing* **Title:** *Manager/Sr Manager – Partner Reporting & Analytics* **Location:** *AL-Montgomery* **Requisition ID:** *17007924*
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Linux file system management. Experience designing and building complex, high performance platforms to support Big Data ingestion, curation and analytics....
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Comment on Friday Philosophy – “Technical Debt” is a Poor Term. Try “Technical Burden”? by jgarry   
Funny you mention the 10 or 15 or 20 years... I was called into this place 16 years ago to fix performance problems due to the Rapid Application Development used for customization, as well as some business design decisions that were (and remain) questionable. I'm still fixing things no one has noticed have been wrong this whole time... Replacement is in the works, but that has been the case for a while too. It's all kept a mystery to me, though I have noticed some very bad data-suck attempts that are right in my ballpark. We'll see. Someone (with a Masters in Biology) asked for job advice on another forum, asking how to become a system administrator. He mentioned he had done some scripting in Excel. A local blowhard totally denigrated any scripting language, saying "end user experience can't be counted towards IT any more than commuting every day qualifies you as a mechanic" and, in response to someone taking issue that scripting is more than end user stuff, "Scripting is just normal end user stuff. It's just basic computer literacy." I suggested that programming is the intersection between what he knows and computers, and that he google "scripting job biology", since that pops up lots of python jobs. And I held back from going full flame about what big data work is. The bright side is, the more burden, the more job security. Though that may not be fun, and boy, are we gonna see some large scale damage in the meantime.
          The Rise of Artificial Intelligence in Events [Webinar]   

Join us for this free webinar to learn how to get more from artificial intelligence for your event. Chatbots, Deep Learning, Concierge, Big Data. What all these words hide is an incredible opportunity for event professionals willing to embrace innovation. Artificial Intelligence offers revolutionary opportunities to grow your event. Whether it is customer support, event management, marketing, […]

The post The Rise of Artificial Intelligence in Events [Webinar] by Julius Solaris appeared first on http://www.EventManagerBlog.com


          GEPIA: a web server for cancer and normal gene expression profiling and interactive analyses   
Abstract
Tremendous amounts of RNA sequencing data have been produced by large consortium projects such as TCGA and GTEx, creating new opportunities for data mining and a deeper understanding of gene functions. While certain existing web servers are valuable and widely used, many expression analysis functions needed by experimental biologists are still not adequately addressed by these tools. We introduce GEPIA (Gene Expression Profiling Interactive Analysis), a web-based tool that delivers fast and customizable functionalities based on TCGA and GTEx data. GEPIA provides key interactive and customizable functions including differential expression analysis, profiling plotting, correlation analysis, patient survival analysis, similar gene detection and dimensionality reduction analysis. The comprehensive expression analyses, available with simple clicking through GEPIA, greatly facilitate data mining in wide research areas, scientific discussion and the therapeutic discovery process. GEPIA fills the gap between cancer genomics big data and the delivery of integrated information to end users, thus helping unleash the value of current data resources. GEPIA is available at http://gepia.cancer-pku.cn/.
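
To give a concrete flavor of one of these functions, the sketch below runs a PCA-based dimensionality reduction on a made-up expression matrix using scikit-learn; it is a generic illustration of the technique, not GEPIA's own code, and all names and numbers are hypothetical.

```python
# Generic illustration of dimensionality reduction on expression data,
# not GEPIA's implementation; the matrix here is random stand-in data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical matrix: 200 samples (tumor + normal) x 5,000 genes.
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 5000))
log_expr = np.log2(expression + 1)  # the usual log2(value + 1) transform

pca = PCA(n_components=2)
coords = pca.fit_transform(log_expr)
print("explained variance ratio:", pca.explained_variance_ratio_)
# Plotting coords[:, 0] vs coords[:, 1], colored by tumor/normal label,
# gives the kind of 2-D view a tool like GEPIA renders interactively.
```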

          Web-Scale Converged Infrastructure Designed to Simplify IT   
Download this resource to key into the specs of one hyper-converged infrastructure, and discover why its implementation is ideal for supporting multiple, virtual workloads such as VDI, private cloud, database, OLTP and data warehouse as well as virtualized big data deployments. Published by: Dell EMC and Nutanix
          1310ss Big Data Solution Architect   

          Discrimination, Lack of Diversity, and Societal Risks of Large-Scale Data Mining Highlighted in Special Issue of Big Data   
          Business Analyst - AIG (Olathe, KS)   
Position Description: • Strong communication skills (written and verbal) to deliver insights from quantitative analyses to technical and non-technical audiences. • Stay current on the latest analytic techniques and big data trends. • Work with the business to identify problem areas and collaborate with IT teams to implement analytic solutions.
          SR HADOOP/BIG DATA ANALYST   
PA-Pittsburgh, Lead implementations for enhancements/projects, with the ability to clearly communicate issues, risks and status updates against developed plans and dashboards… Pyramid Consulting Group has a Pittsburgh client with an immediate need for an experienced SR HADOOP/BIG DATA ANALYST to work as a data and technical business analyst on the Asset and Liability Management (ALM) platform for Corporate Treasury. The can
          Big Data Gone Bad for Fighting Crime   
Source: https://www.bja.gov/JusticeToday/images/SmartPolicingLogo.jpg

Nearly two years ago, we featured a story about a software system called PredPol, a data analytics tool that promised to deliver lower crime rates through algorithms designed to recognize the patterns behind series of crimes, the better to anticipate where to position police officers to intervene in criminal activity.

The results from the pilot studies of the predictive software were promising, enough so that it received a wider test in real-life policing, being applied to more regions within the cities where it was trialled. In October 2016, Kristian Lum and William Isaac of the Human Rights Data Analysis Group released the results of their study into how well PredPol was performing in reducing crime, but what they found gives cause for concern about the data being used to direct police activities in the U.S. cities that have adopted the data analytics approach to fighting crime.

Lum and Isaac discovered that the software had a real shortfall because of biases in the police records that were being fed into it.

While police data often are described as representing "crime," that's not quite accurate. Crime itself is a largely hidden social phenomenon that happens anywhere a person violates a law. What are called "crime data" usually tabulate specific events that aren't necessarily lawbreaking – like a 911 call – or that are influenced by existing police priorities, like arrests of people suspected of particular types of crime, or reports of incidents seen when patrolling a particular neighborhood.

Neighborhoods with lots of police calls aren't necessarily the same places the most crime is happening. They are, rather, where the most police attention is – though where that attention focuses can often be biased by gender and racial factors.

Focusing on Oakland, California, Lum and Isaac tested PredPol for bias, using race and income level as observable characteristics, on crimes involving illegal drug use, whose incidence studies have indicated is relatively uniform across the racial and income demographics of Oakland's population.

But you wouldn't know that from the results of PredPol's predictive software using Oakland's police data.

Our recent study... found that predictive policing vendor PredPol's purportedly race-neutral algorithm targeted black neighborhoods at roughly twice the rate of white neighborhoods when trained on historical drug crime data from Oakland, California. We found similar results when analyzing the data by income group, with low-income communities targeted at disproportionately higher rates compared to high-income neighborhoods.

The reason for that turned out to have been directly embedded in the data the race-neutral PredPol software used to direct police activities. Because the data reflected the increased level of law enforcement activities that already existed in the city's primarily black and low income neighborhoods, the software directed police to increase their intervention efforts in the areas where they were already disproportionately focusing their attention.

That increased attention would then be reinforced in an adverse feedback loop, as the police records being generated from their new increased activity would tend to amplify the already disproportionate law enforcement activities in these areas, which would have consequences for a police department already accused of practicing racial profiling in law enforcement.
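
To make the mechanism concrete, here is a toy simulation of that feedback loop (a minimal sketch with made-up numbers, not Lum and Isaac's actual model): two neighborhoods have identical true crime rates, but one starts with more patrols, and each day's patrol allocation follows the previous day's recorded incidents.

```python
# Toy model of the predictive-policing feedback loop; all numbers are
# hypothetical, and this is not PredPol's or Lum and Isaac's real model.
TRUE_RATE = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
share = {"A": 2 / 3, "B": 1 / 3}     # biased starting patrol allocation
GAMMA = 1.2  # >1 means patrols uncover disproportionately more incidents

for day in range(1, 11):
    # Recorded crime reflects true crime *and* where police are looking.
    recorded = {n: TRUE_RATE[n] * share[n] ** GAMMA for n in share}
    total = sum(recorded.values())
    # "Race-neutral" predictor: tomorrow's patrols follow today's records.
    share = {n: recorded[n] / total for n in recorded}
    print(day, {n: round(s, 3) for n, s in share.items()})

# With GAMMA = 1.0 the initial 2:1 bias never washes out; with GAMMA > 1
# it compounds until nearly all patrols target neighborhood "A", even
# though the two neighborhoods' true crime rates are identical.
```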

The software-directed adverse feedback loop would also open the city up to the "squeezing balloon" problem we noted in our previous coverage, where increasing police pressure in one area would result in increases in the incidence of crime in other areas, which would now be more likely to escape both detection and intervention.

What the results indicate is that using data analytics to effectively reduce crime is more complicated than simply factoring race or income out of the software packages used to maximize the return on police investment. Such tools can be useful, but the GIGO (garbage in, garbage out) principle definitely applies.


          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Experience and understanding of various third-party vendor solutions such as BI reporting solutions, ETL solutions, and search engines....
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          Big Data Solution Architect - GlaxoSmithKline - Upper Providence, PA   
Programming languages like Java, Scala, C++. Job Category - Enterprise Architecture:. Experience designing and building complex, high performance platforms to...
From GlaxoSmithKline - Sun, 11 Jun 2017 06:59:54 GMT - View all Upper Providence, PA jobs
          DevOps Engineer - GlaxoSmithKline - Upper Providence, PA   
Big Data Experience: • 4+ years of administration experience on Big Data Hadoop Ecosystem • In-depth knowledge of capacity planning, performance management...
From GlaxoSmithKline - Fri, 19 May 2017 18:39:00 GMT - View all Upper Providence, PA jobs
          Big Data. Small Brains.   

          The Internet of Things and its impact on the cities of the future   
Companies hold a massive amount of data, and their business success or failure will depend on how skillfully they manage it. The now-famous Big Data makes it possible to turn that data into...
          Big Data Developer   

          Hops and Tensorflow enabling rapid machine learning   


Distributed Tensorflow-as-a-Service now available on SICS ICE
Starting this June, we are offering Tensorflow as a managed service on the Hopsworks platform, hosted at the SICS ICE research datacenter facility in Luleå. 
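
Hopsworks' pitch is that it manages this plumbing for you; for context, here is a minimal sketch of what hand-rolled distributed TensorFlow (1.x era) setup looks like without such a service. The hostnames and ports are placeholders, and this is generic TensorFlow, not Hopsworks-specific code.

```python
# Generic TF 1.x distributed setup, shown for context only; hostnames and
# ports are placeholders, and none of this is Hopsworks-specific code.
import tensorflow as tf

# Describe the cluster: parameter servers hold variables, workers compute.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each participating process starts a server for its own role and index.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Place variables on the ps job and replicate the graph on the workers.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    x = tf.placeholder(tf.float32, shape=[None, 10])
    w = tf.Variable(tf.zeros([10, 1]))
    y = tf.matmul(x, w)

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
```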

hadoop, hops, data center, SICS ICE, big data, machine learning

RISE SICS
          Enterprise Performance Management Manager - Verizon - Basking Ridge, NJ   
Hadoop, HDFS, Hive, HBase, Machine Learning, Predictive and Prescriptive Analytics, Oracle Big Data Discovery....
From Verizon - Thu, 29 Jun 2017 10:58:49 GMT - View all Basking Ridge, NJ jobs
          Companies 'won’t have a choice' about using Big Data for ID management   

Ben Bulpett, director of Enline, spoke to Computerworld UK at Oracle OpenWorld
By Derek du Preez | Computerworld UK | Published 12:47, 04 October 12

Enterprises will be forced to invest in Big Data technologies to monitor employee behaviour in order to better control remote access to company systems, which is becoming a problem thanks to the ongoing surge in consumerisation.


          Big Data Developer   

          Web-COSI Presents at the Q2016 Conference in Madrid   
Web-COSI recently presented at the Q2016 Conference, Madrid, 1-3 June – an important biennial event supported by Eurostat to present and discuss the progress and development of quality in official statistics. During the panel session "Big Data Oriented Systems", Donatella…
          Sr Software Engineer ( Big Data, NoSQL, distributed systems ) - Stride Search - Los Altos, CA   
Experience with text search platforms, machine learning platforms. Mastery over Linux system internals, ability to troubleshoot performance problems using tools...
From Stride Search - Tue, 04 Apr 2017 06:25:16 GMT - View all Los Altos, CA jobs
          A Real Estate Startup Awarded as "Real Estate Tech Broker of the Year" Builds Intelligence Based Personalized Recommendation System for its Customers   

[India], July 1 (ANI-BusinessWireIndia): zVesta is a real estate marketplace dedicated to empowering consumers with data, inspiration and knowledge about places, and to connecting them with the best local professionals who can help by providing value-added services. zVesta aims to organize the unorganized real estate industry and provide a standard ERP to builders, developers and consumers.

The company began operations in April 2015. It offers an intelligence-based personalized recommendation system, with predictive and forecasting models for buying, selling or renting and for identifying hot locations and their amenities. The company has also developed an algorithm, integrated into its webpage along with sentiment analysis, to spot opportunities in the realty business.

With a vision of driving transparency in real estate by building a living index for homes, the company has already signed up more than 176 builders and 656 projects, and has a user base of 2,302.

Speaking on the occasion, Mr. Rajan Danng, Founder and MD, zVesta, said, "Indian real estate will witness a paradigm shift: digitization, artificial intelligence, big data, new construction methods and augmented reality will all become the new reality. Under RERA rules, builders and real estate agents have been given three months to register their projects with the regulator or face a penalty. Transparency would for