[Tutorial] Download Pluralsight Exploring and Analyzing Data with QlikView - a course on exploring and analyzing data with QlikView

Download Pluralsight Exploring and Analyzing Data with QlikView - a course on exploring and analyzing data with QlikView

Creating dashboards and analyzing data once took months for each deliverable, slowing down organizational decision-making; with QlikView it can be done in a matter of minutes ...



          Fill in a Google Spreadsheet with Data by brittpols   
Hey There, I'm looking for someone who could help me do a little web digging and fill out the researched data into a Google Sheet. I'm looking for 150 names, personal e-mail addresses and Twitter links... (Budget: €8 - €30 EUR, Jobs: Data Entry, Data Mining, Excel, Research, Web Scraping)
          Ethereum hoax led to $4B in losses   

The price of cryptocurrency Ethereum dropped dramatically following news of its creator’s death. One problem: he wasn’t dead. A post on 4chan claimed Vitalik Buterin — creator of blockchain technology Ethereum and currency Ether — died in a fatal car accident. Following news of what was later confirmed to be a hoax, the price of Ether went from $317 to $286 in moments, before bottoming out at around $216. Buterin dispelled any notion he was dead with a selfie. In it, he’s holding up blockchain-based data mined days after his “death,” to prove he was around. Still, this wiped out…

This story continues at The Next Web
          Data Mining Your PDM Vault   

EngineersRule breaks down how to create reports with SOLIDWORKS PDM.

The post Data Mining Your PDM Vault appeared first on Engineers Rule.


          Data mining analyst (Part-time) - Arsh Biotech - India   
Essential Qualifications: Bachelors in any of the Life Sciences (Students undertaking Bachelors may also apply). Excellent written communication skills and
From Arsh Biotech - Fri, 07 Apr 2017 10:58:14 GMT - View all India jobs
           Big Data : In Election And In Business Creates Big Impact   

The US election results, and the process behind them, have created worldwide impact. Not only was the election noticed for choosing someone for arguably the most powerful office in the world, it brought with it many innovations and advances. In 2008, when Mr. Obama won the election for the first time, it was very clear that technology played a substantial role in his assuming office. We saw in 2008 that the online world was leveraged in a big way in the campaigns, to a very successful outcome. In the just concluded 2012 election, clearly data, data insights and data-centric predictions played a very big role in shaping the election outcome. A lot of deserved kudos went in the direction of Nate Silver for his super-accurate predictions of the election results based on data insights. Many people looked at it from different perspectives. The media industry focused on how work like this will, in and of itself, influence the media coverage of elections and the assessment of preference trends. Nate is the author of the Amazon best-selling book, “The Signal and the Noise: The Art and Science of Prediction“. In the days leading up to the election, he was on every major media show, explaining how a detailed analysis of huge amounts of data, distilled from many different sources, enabled him and his team to predict with a fair degree of confidence and certainty what would happen district by district in the US elections (it’s actually a great reward to see Nate Silver’s appearance on Stephen Colbert’s show, as reported by the LA Times). Very clearly, he was accurate to the last level of detail, in an election in which swings were claimed by both sides and, in the days close to the election, the challenger’s “momentum vote” was supposed to be muddying the trends.

A lovely article by John McDermott at AdAge brings out that Silver’s work will help shift reporting from its “touch and feel” aspects to reporting that is anchored in data – facts and statistics. The article quotes comScore online traffic analyst Andrew Lipsman as saying, “Now that people have seen [data-analysis-centered political analysis] proven over a couple of cycles, people will be more grounded in the numbers.” Chatter in the online world, quoting Bloomberg as the source, suggested that Barack Obama’s site was placing 87 tracking cookies on the computers of people who accessed it, while Mitt Romney’s site was placing 48. Tarun Wadhwa reports at Forbes that the power of big data has finally been realized in the US political process:

“Beyond just personal vindication, Silver has proven to the public the power of Big Data in transforming our electoral process. We already rely on statistical models to do everything from flying our airplanes to predicting the weather. This serves as yet another example of computers showing their ability to be better at handling the unknown than loud-talking experts. By winning ‘the nerdiest election in the history of the American Republic,’ Barack Obama has cemented the role of Big Data in every aspect of the campaigning process. His ultimate success came from the work of historic get-out-the-vote efforts dominated by targeted messaging and digital behavioral tracking.” This election has proven that the field of “political data science” is now more than just a concept – it’s a proven, election-winning approach that will continue to revolutionize the way campaigns are run for decades to come. It is common knowledge that the campaign had been heavily leveraging the web platform in very many sophisticated ways. The campaign spectacularly succeeded in integrating its political infrastructure with the web infrastructure it managed to create. A peer-to-peer, bottom-up campaign seemed to be the strategy that finally delivered results. Volunteer participation, feedback synthesis and citizen vote drives were successfully carried out through the web platform on a massive scale hitherto unknown. The campaign, heavily shaped by the power of social networks and the internet, energized youth participation in unimaginable ways, signifying the triumph of technology. It’s a treat to watch: mobile, social and Big Data coming together and making an impact in the 2012 presidential election.

Let’s look at the complexities involved in this exercise: a notable demographic shift in America made the traditional vote bases less influential (a trend that will continue dramatically in the future) – the absolute numbers may not have come down, but their proportion of the votable base dropped somewhat, leaving the outcome in the hands of a newly emerging swing-voter base. Technology played a significant role in rigorous fact-checking – during a presidential debate, typical citizens were looking at fact-checking analysis on their other screens while watching the debate on television. Pew Research found that many were watching dual screens during the debate. All well, till one looks at the paradox here: as more and more effort is made and money is spent to flood the media with political messages, the impact is significantly less, because people no longer rely on a single news source. Many American homes today are embracing the “four screen” world (TV, laptop, tablet and phone, all used in tandem for everything in our lives), so the ability of any promotion to create a positive impact is becoming tougher and tougher.

This is observed alongside the fact that the U.S. is also undergoing a deep structural and institutional change, affecting every walk of American life. While the online world is growing, it is a common sight in cities and downtowns to see established chains closing shops, unable to withstand the competition striking at them from the cyberworld. Trends like this clearly influence the economic role played by different industries, and trends in wealth creation, job creation, city growth, etc. Younger voters are, by default, more attuned to these changing trends and their impact, and so begin to think of their prospects through a different prism than older voters, who generally hold conventional views; this creates a deeper stratification within society.

Time magazine has Michael Scherer doing an in-depth assessment of the role big data and data mining played in Obama’s campaign as well. Campaign manager Jim Messina, Scherer writes, “promised a totally different, metric-driven kind of campaign in which politics was the goal but political instincts might not be the means,” and employed a massive number of data crunchers to give the campaign an analytics edge. The campaign team put together a massive database that pulled information from all areas of the campaign — social media, pollsters, consumer databases, fundraisers, etc. — and merged them into one central location. The Obama campaign believed its biggest institutional advantage over its opponent’s campaign was its data, and went out of its way to keep the data team away from the glare: the team worked in windowless rooms, and each member was given a codename. That in and of itself signifies the importance the campaign attached to data – Big Data.

Scherer adds: “The new megafile didn’t just tell the campaign how to find voters and get their attention; it also allowed the number crunchers to run tests predicting which types of people would be persuaded by certain kinds of appeals.” Scherer’s piece is an astoundingly fascinating look at how data was put to use in a successful presidential campaign. The election results are, in a way, a big victory for the nerds and big data. Similarly, some time back there was a sensational article on how Target figured out a teenage girl was pregnant even before her father could find out. Inside enterprises, there should likewise be strong advocates for frameworks that “know everything” through the world of data and align the business to succeed.

Large-scale data gathering and analytics are quickly becoming a new frontier of competitive differentiation. While the moves of online business leaders like Amazon.com, Google, and Netflix get noted, many traditional companies are quietly making progress as well. In fact, companies in industries ranging from pharmaceuticals to retailing to telecommunications to insurance have recently begun moving forward with big data strategies. Inside business enterprises, a similar revolution is happening: the collection of very fine-grained data, made available for analysis in near real time. This helps enterprises learn the preferences of an individual customer and personalize offerings for that particular customer – a unique customer experience that would make them come back again and again to do more business. Practically speaking, one of the largest transformations to have happened in large enterprises has involved implementing systems like ERP (enterprise resource planning), CRM (customer relationship management), or SCM (supply chain management) — the large enterprise systems on which companies have spent huge sums. These systems typically manage operations extremely well, and then set the stage for enterprises to gain business intelligence and learn how operations could be managed differently. That’s where Big Data frameworks come in handy, and it’s up to business now to seize that opportunity and take advantage of this very fine-grained data, which simply didn’t exist in similar forms previously. Too few enterprises today fully grasp big data’s potential in their businesses, the data assets and liabilities of those businesses, or the strategic choices they must make to start leveraging big data. By focusing on these issues, enterprises can build a data-driven competitive edge, which in this age is clearly a very powerful determinant of success.


          Science 2.0: New online tools may revolutionize research   
[Excellent article on how Web 2.0 tools are transforming science. The two projects mentioned have been funded by CANARIE in the latest NEP program, amongst a total of 11 similar projects. For more examples of how Web 2.0 is revolutionizing science, please see my Citizen Science blog. Thanks to Richard Ackerman for some of the FriendFeed pointers. Some excerpts from the CBC website – BSA]

http://www.cbc.ca/technology/story/2009/01/08/f-tech-research.html

Citizen Science
http://citizen-science.blogspot.com/

CANARIE NEP program
http://www.canarie.ca/funding/nep/eoi.html

Described as an extension of the internet under the ocean, the Venus Coastal Observatory off Canada's west coast provides oceanographers with a continuous stream of undersea data once accessible only through costly marine expeditions. When its sister facility Neptune Canada launches next summer, the observatories' eight nodes will provide ocean scientists with an unprecedented wealth of information.
Sifting through all that data, however, can be quite a task. So the observatories, with the help of CANARIE Inc., operator of Canada's advanced research network, are developing a set of tools they call Oceans 2.0 to simplify access to the data and help researchers work with it in new ways. Some of their ideas look a lot like such popular consumer websites as Facebook, Flickr, Wikipedia and Digg.
And they're not alone. This set of online interaction technologies called Web 2.0 is finding its way into the scientific community.
Michael Nielsen, a Waterloo, Ont., physicist who is working on a book on the future of science, says online tools could change science to an extent that hasn't happened since the late 17th century, when scientists started publishing their research in scientific journals.
One way to manage the data boom will involve tagging data, much as users of websites like Flickr tag images or readers of blogs and web pages can "Digg" articles they approve. On Oceans 2.0, researchers might attach tags to images or video streams from undersea cameras, identifying sightings of little-known organisms or examples of rare phenomena.
The Canadian Space Science Data Portal (CSSDP), based at the University of Alberta, is also working on online collaboration tools. Robert Rankin, a University of Alberta physics professor and CSSDP principal investigator, foresees scientists attaching tags to specific data items containing occurrences of a particular process or phenomenon in which researchers are interested.
"You've essentially got a database that has been developed using this tagging process," he says.
If data tagging is analogous to Flickr or Digg, other initiatives look a bit like Facebook.
Pirenne envisions Oceans 2.0 including a Facebook-like social networking site where researchers could create profiles showing what sort of work they do and what expertise they have. When a scientist is working on a project and needs specific expertise — experience in data mining and statistical analysis of oceanographic data, for example — he or she could turn to this facility to find likely collaborators.
"It's a really exciting time," Lok says, "a really active time for Science 2.0."

It got lots of buzz on FriendFeed; there are multiple mentions of it:

http://friendfeed.com/e/b2dc0a15-e076-4d2b-8771-d0e37733077e/Science-2-0-on-CBC/

http://friendfeed.com/e/7649d8b6-9f28-424e-9344-875bf2abfc25/Several-conference-attendees-are-quoted-in-this/

(The conference Eva's referring to is Science Online 2009.)

http://friendfeed.com/e/dcfc7f91-82e9-5aac-c7a4-b2cfea8f6b40/Science-2-0-New-online-tools-may-revolutionize/

http://friendfeed.com/e/333a5973-3aab-a3f7-0b84-993e20b94ce4/Science-2-0-New-online-tools-may-revolutionize/

http://friendfeed.com/e/c1824ca1-93d5-452a-b404-199b5d8e04d3/Nature-Network-in-the-news-Expression-Patterns/

http://friendfeed.com/e/9a2d1d68-fb76-c5a6-c43e-3909d7bebec4/Science-2-0-article-quotes-four-ScienceOnline-09/

http://friendfeed.com/e/556befb8-bfa0-4c3e-8c7a-63b94243bf5e/Science-2-0-article-quotes-four-ScienceOnline-09/

http://friendfeed.com/e/03dca000-9f33-849c-b40a-b22178339428/CBCnews-Article-on-Science2-0/
          Notes for WikiCite 2017: Wikispecies reference parsing   

In preparation for WikiCite 2017 I'm looking more closely at extracting bibliographic information from Wikispecies. The WikiCite project "is a proposal to build a bibliographic database in Wikidata to serve all Wikimedia projects". One reason for doing this is so that each factual statement in WikiData can be linked to evidence for that statement. Practical efforts towards this goal include tools to add details of articles from CrossRef and PubMed straight into Wikidata, and tools to extract citations from Wikipedia (as these are likely to be sources of evidence for statements made in Wikipedia articles).

Wikispecies occupies a rather isolated spot in the Wikipedia landscape. Unlike the other sites, which are essentially comprehensive encyclopedias in different languages, Wikispecies focusses on one domain: taxonomy. In a sense, it's a prototype of Wikidata in that it provides basic facts (who described what species when, and what is the classification of those species) that in principle can be reused by any of the other wikis. However, in practice this doesn't seem to have happened much.

What Wikispecies has become, however, is a crowd-sourced database of the taxonomic literature. For someone like me who is desperately gathering up bibliographic data so that I can extract articles from the Biodiversity Heritage Library (BHL), this is a potential goldmine. But there's a catch. Unlike, say, the English-language Wikipedia, which has a single widely-used template for describing a publication, Wikispecies has its own method of representing articles. It uses a somewhat confusing mix of templates for author names, and then barely standardised formatting rules to mark out the parts of a publication (such as journal, volume, issue, etc.). Instead of a single template to describe a publication, in Wikispecies a publication may itself be described by a unique template. This has some advantages, in that the same reference can be transcluded into multiple articles (in other words, you enter the bibliographic details once). But it leaves us with many individual templates with multiple, idiosyncratic styles of representing bibliographic data. Some have tried to get the Wikispecies community to adopt the same template as Wikipedia (see e.g., this discussion) but this proposal has met with a lot of resistance. From my perspective as a potential consumer of the data, the current situation in Wikispecies is frustrating, but the reality is that the people who create the content get to decide how they structure that content. And understandably, they are less than impressed by requests that might help others (such as data miners) at the expense of making their own work more difficult.

In summary, if I want to make use of Wikispecies I am going to need to develop a set of parsers that can make a reasonable fist of parsing the myriad citation formats used in Wikispecies (my first attempts are on GitHub). I'm looking at parsing the references and converting them to a more standard format in JSON (I've made some notes on various bibliographic JSON formats such as BibJSON and CSL-JSON). One outcome of this work will be, I hope, more articles discovered in BHL (and hence added to BioStor), and more links to identifiers, which could be fed back into Wikispecies. I also want to explore linking the authors of these papers to identifiers, as already sketched out in The Biodiversity Heritage Library meets Wikidata via Wikispecies: adding author identifiers to BioStor.
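
To give a flavour of the approach, here is a minimal sketch of such a parser (my illustration, not the code on GitHub; it assumes a simplified reference of the common {{aut|...}}-plus-italics form, and the output is only loosely shaped like CSL-JSON):

```python
import re

def parse_wikispecies_ref(wikitext):
    """Parse one simplified Wikispecies-style reference into a CSL-JSON-like dict.

    Assumes the common pattern: {{aut|Name}} templates for authors, a
    four-digit year, an italicised ''journal name'', and volume(issue): pages.
    Real templates are far messier; this only shows the approach.
    """
    ref = {"type": "article-journal"}

    # Authors sit inside {{aut|...}} templates.
    ref["author"] = [{"literal": a.strip()}
                     for a in re.findall(r"\{\{aut\|([^}]+)\}\}", wikitext)]

    # Year: the first standalone four-digit number.
    m = re.search(r"\b(1[89]\d\d|20\d\d)\b", wikitext)
    if m:
        ref["issued"] = {"date-parts": [[int(m.group(1))]]}

    # Journal: the first run of ''italic'' wiki markup.
    m = re.search(r"''([^']+)''", wikitext)
    if m:
        ref["container-title"] = m.group(1).strip()

    # Volume, optional (issue), and pages, e.g. "12 (3): 45-67".
    m = re.search(r"(\d+)\s*(?:\((\d+)\))?\s*:\s*(\d+(?:[-–]\d+)?)", wikitext)
    if m:
        ref["volume"], ref["page"] = m.group(1), m.group(3)
        if m.group(2):
            ref["issue"] = m.group(2)
    return ref

print(parse_wikispecies_ref(
    "{{aut|Smith, J.}} 2005. A new frog. ''Journal of Herpetology'' 12 (3): 45-67."))
```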


          Professional-Data Scientist-BIG DATA - Plano, TX   
Overall Purpose: The Data Scientist will be responsible for designing and implementing processes and layouts for complex, large-scale data sets used for modeling, data mining, and research purposes. Key Roles and Responsibilities: This position will work closely with...
          Top 10 Data Mining Algorithms That You Can Easily Implement On Weka/Tanagra   

K-means: K-means is a cluster-analysis technique that forms groups of similar elements from a set of objects by creating k groups. We validate the results by comparing the groups (called clusters) with a predefined classification. K-means is an easy algorithm to implement on Weka as well as Tanagra.

Naïve Bayes: Naïve Bayes is a comparatively fast classification algorithm. As the name suggests, it works on the Bayes theorem of probability. The aim is to predict the class of an unknown data module. We can implement the algorithm for various applications like real-time prediction, text classification or recommendation systems, etc.

KNN: K-Nearest Neighbour is a data mining technique that works on historical data instances with their known output values to predict outputs for new data instances. By implementing this algorithm on either Weka or Tanagra, we can get the desired result.

ID3: Decision trees are supervised learning algorithms which are easy to understand as well as easy to implement. The goal is to achieve perfect classification from a given training data set with minimal decisions. ID3 uses the Information Gain concept for its implementation.

Regression: In this algorithm, we have a set of binary or continuous independent variables. Using this set, we […]

The post Top 10 Data Mining Algorithms That You Can Easily Implement On Weka/Tanagra appeared first on Techyv.com.
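
For a sense of how compact the first of these is, here is a minimal, illustrative k-means in plain Python (not Weka or Tanagra code; the data points are made up):

```python
import random

def kmeans(points, k, iters=100):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        new_centroids = [
            tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Two obvious groups of made-up 2-D points.
data = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.1), (8.0, 8.0), (9.0, 8.5), (8.3, 9.1)]
centroids, clusters = kmeans(data, k=2)
print(centroids)
```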


          Oracle considered buying Peter Thiel's Palantir last year — and an ex-Disney exec set up the meeting (ORCL)   


Larry Ellison, Oracle's founder, chief technology officer, and largest shareholder, met with Palantir chairman Peter Thiel for lunch in 2016 to talk about Oracle buying Thiel's company, Bloomberg reported. 

Details about the secret meeting came out in court testimony from Palantir investor Marc Abramowitz, who is suing Palantir over allegations that he was prevented from selling his stake in the data mining company. 

Ellison and Thiel's lunch was set up by Abramowitz and Michael Ovitz, a former executive at Walt Disney, to help the two companies broker a deal, according to Abramowitz's testimony. But the buyout never went through. 

Abramowitz also testified that Goldman Sachs pitched Palantir on the idea of going public in 2015 with a $30 billion offering, Bloomberg reported.

Oracle declined a request from Business Insider for comment. Palantir did not immediately respond to a request for comment. 





          Financial Analyst I   
FL-Coral Gables, Qualifications: OVERVIEW The Financial Planning & Analysis Analyst is a very analytical and hands-on role working within the LatAm FP&A team to provide support in various financial processes and reports for monthly forecasting, budgeting and LRP. This role will help research and analyze financial data and will spend a considerable amount of time performing G/L account analysis and data mining. […]
          Comment on Worldwide protest against surveillance: Freedom not Fear 2008 by JS   
Your threats are those affiliated with and in UK GCHQ. Conducting the denial of civil, constitutional and human rights and transnational organised white-collar cyber crime at public expense, these sex predators abuse technology for video voyeurism, cyber surveillance, ID/IP theft, data mining and cyber terrorism, and exploit the public for information warfare. As a 20-year subject of abuse, having operational methods, names, federal law violations and evidence, these people are global threats to security and safety. Unless this issue is confronted in courts of law, with arrests, prosecutions and imprisonment done, these people will deny basic human rights to this and the next generation. Willing to speak against abuse of power.
          Bus Info Developer Cons Sr (Job #5738)   
SLAIT Consulting is currently seeking a Bus Info Developer Cons Sr. for a client in the Virginia Beach, VA area.

SUMMARY:
Viewed as an expert in the development and execution of data mining analyses. Experience or exposure to Informatica, Teradata, Tidal, SQL/SSIS, Perl, Cygwin, SDLC, Agile, Atlassian/JIRA, regression testing, and unit testing

MAJOR JOB DUTIES AND RESPONSIBILITIES:
Undertakes complex assignments requiring additional specialized technical knowledge. Develops very complex and varied strategic report applications from a Data Warehouse. Establishes and communicates common goal and direction for team. Establishes and maintains advanced knowledge of data warehouse database design, data definitions, system capabilities, and data integrity issues. Acts as a source of direction, training and guidance for less experienced staff. Monitors project schedules and costs for own and other projects. Develops and supports very complex Data Warehouse-related applications for business areas requiring design and implementation of database tables. Conducts training on use of applications developed.

EDUCATION/EXPERIENCE:
Requires a BS/BA degree; 6 years' experience; or any combination of education and experience which would provide an equivalent background.

SKILLS:
Expert-level PC, spreadsheet, and database skills, as well as experience in standard Business Information tools and programming/query languages, are also required. Ability to communicate effectively with multiple levels within the organization. This job is focused on spending time thinking about programming and how it would be used to design solutions, as compared to the Bus Info Developer Consultant job.

Why SLAIT?
We have been growing since 1990, with offices in Virginia; Gaithersburg, MD; New York; Raleigh, NC; and Austin, TX. For over twenty-three years, we have delivered customized, creative IT solutions for customers in the commercial and state and local government sectors.
*Staff Augmentation *Managed Services *IT Outsourcing *IT Consulting

Thank you for your consideration, please submit your resume today! Visit us at www.slaitconsulting.com

**Must be able to work for any employer in the United States. No Visa sponsorship.**

SLAIT Consulting is an Equal Opportunity Employer
          Programmer Analyst (Job #5681)   
SLAIT Consulting is currently seeking a Programmer Analyst for a client in the Virginia Beach, VA area.

SUMMARY:
Viewed as an expert in the development and execution of data mining analyses.

MAJOR JOB DUTIES AND RESPONSIBILITIES:
Undertakes complex assignments requiring additional specialized technical knowledge. Develops very complex and varied strategic report applications from a Data Warehouse. Establishes and communicates common goal and direction for team. Establishes and maintains advanced knowledge of data warehouse database design, data definitions, system capabilities, and data integrity issues. Acts as a source of direction, training and guidance for less experienced staff. Monitors project schedules and costs for own and other projects. Develops and supports very complex Data Warehouse-related applications for business areas requiring design and implementation of database tables. Conducts training on use of applications developed.

Additional Requirements (.NET and data): -- Strong T-SQL/PL/SQL skills (Oracle and MS SQL) -- Strong ETL skills, SSIS, SQL Loader -- some .NET experience -- Data Migration experience preferred -- Informatica experience nice to have.

EDUCATION/EXPERIENCE:
Requires a BS/BA degree; 6 years' experience; or any combination of education and experience which would provide an equivalent background. SKILLS: Expert-level PC, spreadsheet, and database skills, as well as experience in standard Business Information tools and programming/query languages, are also required. Ability to communicate effectively with multiple levels within the organization. This job is focused on spending time thinking about programming and how it would be used to design solutions, as compared to the Bus Info Developer Consultant job.

Why SLAIT?
We have been growing since 1990, with offices in Virginia; Gaithersburg, MD; New York; Raleigh, NC; and Austin, TX. For over twenty-three years, we have delivered customized, creative IT solutions for customers in the commercial and state and local government sectors.
*Staff Augmentation *Managed Services *IT Outsourcing *IT Consulting

Thank you for your consideration, please submit your resume today! Visit us at www.slaitconsulting.com

**Must be able to work for any employer in the United States. No Visa sponsorship.**

SLAIT Consulting is an Equal Opportunity Employer
          Quantitative Manager (Job #6454)   
The successful candidate will be creative, resourceful, and experienced with Agile methods and techniques to implement Scrum, and must be a self-starter with a strong background in statistics, machine learning and big data, including information retrieval, natural language processing, algorithm analysis, and real-time distributed computing methods. As a quantitative manager you have personnel management responsibility and can exercise your talents to lead teams of expert data scientists and engineers on multiple assignments and projects in a disciplined and fast-paced environment. You must be confident in tackling complex engineering problems, and will be expected to design algorithms and codify large-scale statistical models for real-time processing based on our analytics architecture.

Advanced Analytics
The Advanced Analytics service area is composed of professionals who possess competency and experience in the areas of risk management, business and operational targeting processes, computational linguistics, machine learning, knowledge discovery, semantic engineering, and probabilistic and statistical data mining. Advanced Analytics professionals use these skills to assess, analyze, and improve the effectiveness and efficiency of targeting methods and operational control processes, offer recommendations to improve operations, and assist clients with enterprise risk and compliance activities.

Requirements
Minimum Qualifications
• Excellent communication skills and ability to understand and communicate business requirements
• Excellent analytical and problem-solving skills
• Strong programming skills and experience in SPSS, SAS, R, Matlab and similar toolset and deep understanding of exploratory data analysis
• Background in statistical techniques, NLP and machine learning, predictive modeling, data mining, statistical inference and classification algorithms
• Develop statistical models and analytical methods to predict, quantify, and forecast multiple business concerns and provide performance reporting capabilities
• Experience in modeling techniques, statistical analysis, propensity score matching, multivariate analysis, logistic regression, time series, survival analysis, decision trees, and neural networks
• BA/BS in Statistics, Mathematics, CS or related technical field, and MS or PhD preferred
• Strong sense of passion, teamwork and responsibility
• Willingness to travel and flexibility to commute to clients in the Washington D.C. metro area as needed
          Data Architect contract job in Reston, VA   
Seeking a Data Architect for a contract job in Reston, VA.

Job Description:
The Senior Data Architect is responsible for data strategies, architectures and technical plans that are aligned with the company's mission, strategy, goals, and objectives. This particular position will focus on creation of Data Architecture(s) for complex system and data integration programs.
To accomplish this, this position will both lead and work collaboratively with business and technology stakeholders to establish and maintain the enterprise data reporting architecture, strategy and roadmap. The Senior Data Architect will be a key participant within the Enterprise Architecture (EA) Planning process, providing domain expertise.
Succeeding as a Senior Data Architect will require the technical knowledge and experience necessary to build and integrate solutions across multiple technologies and lines of business in an enterprise environment. This requires maintaining an awareness and knowledge of emerging trends in technology and their usage within the industry, applied to the establishment of new and innovative data reporting strategies that support the company's business needs.
In addition, successful candidates will be innovative thinkers, passionate in their pursuit of world-class, data-technology-enabled business solutions. Candidates must also be strong leaders, active partners, and able to communicate clearly and effectively to both technical and executive-level audiences. They must work closely with other data disciplines to establish data best practices for analysis, modeling, designing, and building business information reporting and analytics systems. This position requires an ability to build rapport and credibility with management and software development teams, and the ability to document solutions and architectures. Successful candidates must be action-oriented self-starters, capable of solving complex technical problems both independently and in a team environment.

Responsibilities:
• Responsible for design, build, and maintenance of the company's data architectures for complex, highly visible data integration solutions that are both flexible and scalable.
• Function as a senior consultant and contributor on enterprise projects to provide data analysis, design, modeling, and implementation support to project teams, database development groups, IT, and the enterprise, and ensure that reporting solutions are aligned with our architectural direction.
• Organize and lead teams to develop and refine conceptual data information models. Extend conceptual models with business rules to develop logical data models. Develop analytical models using statistical analysis tools, data mining, and other tools to create high-level data architecture diagrams documenting system and data flows. Provide guidance and support in the development of information value chain analysis, data quality metrics and metadata capture. Evaluate alternatives for integrating legacy data sources with enterprise relational models and strategies for transactional and analytical systems to effectively share data.
• Establish and maintain alignment of the data architecture to the company's business strategy, goals and objectives, and the defined architectures of other key enterprise architecture domains (e.g. application, data, security, etc.).

Required Skills:
• BS degree in Computer Science or related field/equivalent experience
• 12+ years of IT experience including development of large-scale, complex data architectures
• Demonstrated ability to architect and model mission-critical BI, OLTP, OLAP, and ETL solutions leveraging multiple DBMS technologies (Erwin, Oracle, Informatica, Cognos)
• Demonstrated ability to architect and model data services using SOA principles
• Demonstrated leadership skills with experience in leading and mentoring technical staff in the development and usage of data architectures and solutions
• Demonstrated experience translating business and technical requirements into comprehensive data reporting strategies and analytic solutions
• Demonstrated ability to develop and maintain good customer working relationships
• Extensive background and expertise in developing and managing data technologies, technical operations, reusable data services, and related tools and technologies
• Working knowledge of and experience with other enterprise domains (application, infrastructure, web, ecommerce, ERP, etc.), IT governance frameworks, concepts, methodologies and modeling techniques
• Demonstrated ability to adequately plan and meet delivery objectives and maintain adequate service levels in a highly dynamic, complex environment

Preferred Skills:
• Demonstrated critical thinking skill, including abilities in analysis and problem solving
• Experienced in "systems thinking" – the ability to break problems into manageable pieces, and to see how the pieces interact with one another and can be assembled into an integrated, functioning, "whole" system
• Excellent verbal and written communication capabilities, and skillful at facilitation and negotiation
• Effective team player with strong emotional intelligence – self-awareness, confidence, ability to manage conflict, empathy
• Ability to effectively respond to technical questions and issues – i.e., effective in communicating complex technology concepts to diverse (both technical and non-technical) audiences at all levels in the organization
• Passion for technology, with an ability to understand and assess new technologies, and their potential applicability to business needs in an efficient, effective manner

Education/Certifications:
• BS degree in Computer Science or related field/equivalent experience
• 12+ years of IT experience including development of large-scale, complex data architectures
          Info Developer Cons Sr (Job #5569)   
SLAIT Consulting is currently seeking an Info Developer Cons Sr. for a client in the Virginia Beach, VA area.

CLIENT COMMENTS:

Additional Requirements (.NET and data): -- Strong T-SQL/PL/SQL skills (Oracle and MS SQL) -- Strong ETL skills, SSIS, SQL Loader -- some .NET experience -- Data Migration experience preferred -- Informatica

SUMMARY:

Viewed as an expert in the development and execution of data mining analyses.

MAJOR JOB DUTIES AND RESPONSIBILITIES:

Undertakes complex assignments requiring additional specialized technical knowledge. Develops very complex and varied strategic report applications from a Data Warehouse. Establishes and communicates common goal and direction for team. Establishes and maintains advanced knowledge of data warehouse database design, data definitions, system capabilities, and data integrity issues. Acts as a source of direction, training and guidance for less experienced staff. Monitors project schedules and costs for own and other projects. Develops and supports very complex Data Warehouse-related applications for business areas requiring design and implementation of database tables. Conducts training on use of applications developed.

EDUCATION/EXPERIENCE:

Requires a BS/BA degree; 6 years' experience; or any combination of education and experience which would provide an equivalent background. SKILLS: Expert-level PC, spreadsheet, and database skills, as well as experience in standard Business Information tools and programming/query languages, are also required. Ability to communicate effectively with multiple levels within the organization. This job is focused on spending time thinking about programming and how it would be used to design solutions, as compared to the Bus Info Developer Consultant job.

Why SLAIT?

We have been growing since 1990, with offices in Virginia; Gaithersburg, MD; New York; Raleigh, NC; and Austin, TX. For over twenty-three years, we have delivered customized, creative IT solutions for customers in the commercial and state and local government sectors.
*Staff Augmentation *Managed Services *IT Outsourcing *IT Consulting

Thank you for your consideration, please submit your resume today! Visit us at www.slaitconsulting.com

**Must be able to work for any employer in the United States. No Visa sponsorship.**

SLAIT Consulting is an Equal Opportunity Employer

          Broadcasting method for broadcasting images with augmented motion data   
A broadcasting method for broadcasting images with augmented motion data, which may utilize a system having at least one camera, a computer and a wireless communication interface. The system obtains data from motion capture elements, analyzes the data, and optionally stores the data in a database for use in broadcasting applications, virtual reality applications and/or data mining. The system also recognizes at least one motion capture data element associated with a user or a piece of equipment, and receives data associated with the motion capture element via the wireless communication interface. The system also enables unique displays associated with the user, such as 3D overlays onto images of the user to visually depict the captured motion data. Ratings, compliance, and ball flight path data can be calculated and displayed, for example on a map or timeline or both. Furthermore, the system enables performance-related equipment fitting and purchase.
          Operations Manager - 20 VIC Management - Calgary, AB   
The Angus tenant services program to data mine the stored information to assist with the… Operations Manager - The CORE....
From 20 VIC Management - Thu, 06 Apr 2017 06:24:55 GMT - View all Calgary, AB jobs
          Gemastik   
Gemastik

A national student program in the field of information and communication technology (ICT).
It is an event intended to improve the quality of students in advancing ICT and its use in Indonesia. The event serves as an outlet for student creativity in developing information technology.

A. Gemastik categories:
1. Programming
2. Software development
3. Data mining
4. Network security
5. Information systems
6. Animation
7. Smart devices / embedded systems
8. Game application design
9. E-Government

B. Programming
· Scored on the speed and correctness of the program
· Contestants are given a set of problems and must produce working programs within 3-5 hours
· Beyond competing on coding speed, contestants must also find the right algorithm
· Programming languages used include:
· Java
· C++
· C#

C. Software Development
· Creative ideas offering software-based solutions to problems in Indonesia
· Must have an impact on the self-reliance and intelligence of Indonesian society
· Must be backed by data

D. Data Mining
Applying machine learning and big data to problem solving in ways that bring benefit to human interests

E. Network and Information System Security
· Systems are designed with particular holes or hidden information that make it possible to hack them
· A report is required as evidence that participants did the work themselves

F. Animation
· Must show creativity and innovation toward building a self-reliant society in Indonesia
· Entries take the form of short films in digital animation
· Essential elements: story and characters

H. Design User Experience
· Oriented toward the user's comfort and ease when using the product
· The experience the user gains

I. ICT business development: a business idea, a startup, and business growth
J. Game application development: judged on the submitted proposal

K. E-Government: a new category for government applications

          Senior Manager, Data Engineering - Capital One - Chicago, IL   
Experience delivering business solutions written in Java. Data mining, machine learning, statistical modeling tools or underlying algorithms....
From Capital One - Sat, 17 Jun 2017 18:47:28 GMT - View all Chicago, IL jobs
          Agile Software Developer - Subject Matter Expert (ASD-SME) (C) with Security Clearance - Metronome LLC - Springfield, VA   
Expertise with Informatica, Syncsort DMX-h and Ab Initio desired. Expertise with machine learning, data mining and knowledge discovery desired....
From ClearanceJobs.com - Sun, 04 Jun 2017 18:11:10 GMT - View all Springfield, VA jobs
          Datafication   

For poets who are into data mining, personalization, and privacy issues (who isn't?):
"After all, throughout The Formula, Dormehl signals that he wants us to think carefully about what it means to be human in an age where algorithms increasingly tell us who we are, what we want, and how we'll come to behave in the future. Indeed, Dormehl goes so far as to declare that our age is marked by a "crisis of self," where the Enlightenment conception of "autonomous individuals" is challenged by an algorithmic alternative. Individuals have become construed as "one categorizable node in an aggregate mass." --Evan Selinger, Los Angeles Review of Books
The Formula: How Algorithms Solve All Our Problems -- And Create More, Luke Dormehl, Perigee Books, 2014, 288 pp., ISBN 9780399170539

          5 Challenges Your Company Has to Overcome to Succeed in Data Mining   

Data lakes are failing, and fast. They are not able to support the real time-to-market requirements of the new big data innovations. Many companies still think that data lakes are ineffective and expensive. Yet data lakes are supposed to be a rich source of useful data for most companies, and to facilitate the collocation of data […]

The post 5 Challenges Your Company Has to Overcome to Succeed in Data Mining appeared first on SmartData Collective.


          Book Reviews: Principles of Data Mining. By David Hand, Heikki Mannila, and Padhraic Smyth.   
"Principles of Data Mining. By David Hand, Heikki Mannila, and Padhraic Smyth. MIT Press, Cambridge, MA, 2001. $50.00. xxxii+546 pp., hardcover. ISBN 0-262-08290-X. Is data mining the same as statistics? The distinguished authors of Principles of Data Mining struggle to make a distinction between the two subjects. In the end, what they have written is a fine applied statistics text." -- page 501
          HIMSS Washington Chapter & CHITA Education Session   
HIMSSWA & CHITA are proud to present: “Data”. Topics to be covered: • Clinical data/data storing • Best practices • Data warehouse – data types/opportunities • Identifying the “right kind of data” • Data mining methodology and techniques • Data mining applications • Predictive modeling. Friday, June 29, 2011, 8:30 am to 3:30 pm, Bellevue. ...more about HIMSS Washington Chapter & CHITA Education Session
          ML Class Notes: Lesson 1 - Introduction   

I am taking the Machine Learning class at Coursera. These are my notes on the material presented by Professor Ng.

The first lesson introduces a number of concepts in machine learning. There is no code to show until the first algorithm is introduced in the next lesson.

Machine learning grew out of AI research. It is a field of study that gives computers the ability to learn without being explicitly programmed. Computers could be programmed to do simple things, but doing more complicated things required that the computer learn for itself. A well-posed learning program is said to learn some task if its performance at that task improves with experience.

Machine Learning is used for a lot of things including data mining in business, biology and engineering; performing tasks that can't be programmed by hand like piloting helicopters or computer vision; self-customizing programs like product recommendations; and as a model to try to understand human learning.

Two of the more common categories of machine learning algorithms are supervised and unsupervised learning. Other categories include reinforcement learning and recommender systems, but they were not described in this lesson.

Supervised Learning

In supervised learning the computer is taught to make predictions using a set of examples where the historical result is already known. One type of supervised learning task is regression, where the predicted value lies in a continuous range (the example given was predicting home prices). Other supervised learning algorithms perform classification, where examples are sorted into two or more buckets (the examples given were of email, which can be spam or not spam, and tumor diagnosis, which could be malignant or benign).

Unsupervised Learning

In unsupervised learning, the computer must teach itself to perform a task because the "correct" answer is not known. A common unsupervised learning task is clustering. Clustering is used to group data points into different categories based on their similarity to each other. Professor Ng gave the example of Google News, which groups related news articles, allowing you to select accounts of the same event from different news sources.

The unsupervised learning discussion ended with a demonstration of an algorithm that had been used to solve the "cocktail party problem", where two people were speaking at the same time in the same room, and were recorded by two microphones in different parts of the room. The clustering algorithm was used to determine which sound signals were from each speaker. In the initial recordings, both speakers could be heard on both microphones. In the sound files produced by the learning algorithm, each output has the sound from one speaker, with the other speaker almost entirely absent.
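
Although the lesson itself shows no code, a toy sketch may make the supervised/unsupervised split concrete. This is my own illustration, not course material, and it uses scikit-learn rather than the Octave/MATLAB the course itself uses; all numbers are invented:

```python
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised regression: house sizes (sq ft) paired with known prices.
sizes = [[850], [1200], [1600], [2100]]
prices = [120_000, 165_000, 210_000, 270_000]
model = LinearRegression().fit(sizes, prices)  # learn from answered examples
print(model.predict([[1400]]))                 # predict on the continuous range

# Unsupervised clustering: the same points with no answers given;
# the algorithm must group them by similarity on its own.
print(KMeans(n_clusters=2, n_init=10).fit(sizes).labels_)
```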



          Comments by edita on the post “Örneklerle Veri Madenciliği” (Data Mining with Examples)   
Explained very clearly. Could you give me two examples, one that is data mining and one that is not? I am preparing an assignment on this topic, and I would appreciate your help.
          DATA SCIENCE SOFTWARE DEVELOPER - IRELAND - Overstock.com - Ireland, WV   
Design, develop, and maintain data mining jobs in Java, Scala, Python, Hadoop, Spark, R, SQL, and/or other query language....
From Overstock.com - Fri, 23 Jun 2017 19:34:32 GMT - View all Ireland, WV jobs
          Radio Netwatcher vom 3.2.2017 – Wolfie Christl – Corporate surveillance, digital tracking, big data & privacy (33c3)   
Corporate surveillance, digital tracking, big data & privacy (33c3). Wolfie Christl talks about data mining as the new oil: how thousands of companies are profiling, categorizing, rating and affecting the lives of billions. Today virtually everything we do is monitored in some way. The collection, analysis and utilization of digital information about our clicks, swipes, […]
          Data Engineer with Scala/Spark and Java - Comtech LLC - San Jose, CA   
Job Description Primary Skills: Big Data experience; 8+ years' experience in Java, Python and Scala with Spark and Machine Learning (3+ years); data mining, data analysis
From Comtech LLC - Fri, 23 Jun 2017 03:10:08 GMT - View all San Jose, CA jobs
          Push to make federally funded research available to the public   
Two new public access bills and a directive from the White House have rekindled public access discussions in 2013. Here’s what’s at stake: increased access to federally funded research and accompanying data, and opportunities for new research using computational analysis (e.g. text and data mining).  Already in effect, the White House directive requires certain federal […]
          Online Marketing   

Online marketing refers to a set of powerful tools and methodologies used for promoting products and services through the Internet. An online marketing strategy includes: search engine optimization, pay-per-click campaigns, email marketing, data mining in complement with custom solutions, and social media marketing. How does online marketing work in Singapore? In Singapore, there are so […]

The post Online Marketing appeared first on Website Design.


          PHYSICAL AND CHEMICAL PROPERTIES AND CLASSIFICATION OF IGNEOUS ROCKS   
Magma is a natural material that is partly or wholly liquid. For ancient volcanic rocks, and for all plutonic rocks exposed at the surface by erosion, a magmatic origin can only be inferred or assumed, even where there is structural or textural evidence for their place of origin. Some of the best indirect evidence comes from the petrology laboratory: ever since James Hall carried out his experiments nearly 200 years ago, petrologists have held samples of magma at the appropriate temperature, then cooled and crystallized them into igneous rock.

Most magmas are silicate liquids, containing roughly 75-100% SiO2 (silica). However, a few rare igneous rocks can form by crystallization of magmas containing very little silica. For example, carbonatite and natrocarbonatite are intrusive and extrusive rocks dominated by sodium, calcium or magnesium carbonate minerals with only minor silicate; nelsonite is a silica-free magnetite-ilmenite/apatite rock known to form in lava flows or as segregations in plutons.

PHYSICAL FORM AND CHARACTERISTICS OF MAGMA

Many of the physical properties of a silicate magma follow largely from its composition; density, for example, is controlled by the relative concentrations of chemical components with different atomic weights.

Magmas rich in elements such as calcium, titanium and, above all, iron are denser than magmas containing lighter elements such as silicon, aluminium and sodium.

Most silicate minerals melt above 1,000 °C, the temperatures at which magma remains liquid. For most magma compositions there is a crystallization interval of several hundred degrees Celsius between the appearance of the first crystals on cooling and the final crystallization of the last few percent of liquid.

Both the interval itself and where it lies vary markedly with the pressure on the magma. Most magmas contain substantial concentrations of volatile species such as water or carbon dioxide. At the high temperatures at which a magma is completely molten, the volatiles are essentially fully dissolved in the liquid.

Viscosity is one of the most important properties of most liquids, magma included. Viscosity is defined as resistance to flow.

Water tends to disrupt the linking of silicate tetrahedra by weakening Si-O bonds, so a high water content reduces a magma's viscosity: as Shaw (1965) showed, 4 weight percent (wt%) H2O dissolved in a dry silicic magma reduces its viscosity from roughly 10^8 to 10^5 at equilibrium. The reader is referred to Hess (1989) for a comprehensive discussion of melt structure and viscosity.

Most magmas sit at temperatures within the crystallization interval between the liquidus (the temperature at which the first crystals appear during cooling) and the solidus (the temperature at which the magma becomes completely solid), and therefore consist of melt, suspended crystals and, possibly, gas bubbles. Viscosity also varies with the applied stress, so that once a magma is flowing, continued flow requires less force. A magma's behaviour is strongly influenced by its composition, viscosity, gas content and crystal load. Low-viscosity extrusive magmas (basaltic, andesitic) tend to form lava flows, whereas high-viscosity intermediate and felsic extrusive magmas produce voluminous pyroclastic material such as ash, tuff or vitrophyre. Low-viscosity intrusive magmas can exploit thin fractures as conduits; high-viscosity felsic magmas are rarely found as dikes or sills, but usually form large, rounded bodies such as stocks and batholiths.

CHEMICAL COMPONENTS OF IGNEOUS ROCKS

Igneous rocks contain many chemical elements. Two are most important because they are the most abundant elements in the Earth's crust, oxygen and silicon, but many other elements occur in magmas and igneous rocks. The chemical constituents of igneous rocks are generally divided into three categories: major elements, minor elements, and trace elements. Major elements typically occur at concentrations well above 2 wt%, and minor elements between 0.1 and 2 wt%. Trace elements occur at concentrations below 0.1 wt% and are typically reported in parts per million or parts per billion. In addition to oxygen and silicon, the abundant elements include aluminum, titanium, iron, manganese, magnesium, calcium, sodium, potassium, and phosphorus. It is normal to report the abundances of major and minor elements as simple oxides: SiO2, TiO2, Al2O3, FeO, Fe2O3, MnO, MgO, CaO, Na2O, K2O, P2O5. Other major elements such as sulfur, fluorine, and chlorine are reported in elemental form.

Although rock and mineral analyses are generally reported as weight percentages of oxides, molar quantities are used for various petrologic purposes. Conversion from weight percent to mole percent is straightforward and uses the molecular weights of the oxides, which can be calculated from the periodic table or found in many mineralogy texts, for example, Deer, Howie and Zussman (1993). To convert, divide the weight percent of each oxide by its molecular weight, then sum all the values and normalize to 100%.
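
As a minimal sketch of this conversion in Python (the molecular weights are rounded, and the basalt analysis is hypothetical):

```python
# Convert a whole-rock analysis from weight percent oxides to mole percent.
# Molecular weights (g/mol) are rounded values for common oxides.
MOLECULAR_WEIGHTS = {
    "SiO2": 60.08, "TiO2": 79.87, "Al2O3": 101.96, "FeO": 71.84,
    "MgO": 40.30, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20,
}

def wt_to_mol_percent(wt_pct):
    # Divide each oxide's weight percent by its molecular weight...
    moles = {ox: w / MOLECULAR_WEIGHTS[ox] for ox, w in wt_pct.items()}
    total = sum(moles.values())
    # ...then normalize the molar proportions to 100%.
    return {ox: 100.0 * m / total for ox, m in moles.items()}

# Hypothetical basalt analysis (wt%), for illustration only.
basalt = {"SiO2": 49.2, "TiO2": 1.8, "Al2O3": 15.7, "FeO": 10.9,
          "MgO": 6.7, "CaO": 9.5, "Na2O": 2.9, "K2O": 1.1}
print(wt_to_mol_percent(basalt))
```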

Two other important chemical components are water and carbon dioxide. In igneous rocks, they appear in an analysis only if hydrous minerals or carbonate minerals are present (and if the volatile contents can be measured with the analytical technique used). Substantial amounts of these two components are dissolved in magmas, but most are lost during crystallization. It is therefore important to remember that the chemical composition of an igneous rock may understate the volatile content of its parent magma.

Methods of chemical analysis

Methods for determining the chemical composition of rocks and minerals vary widely, ranging from traditional (and now rarely performed) "wet chemical" quantitative analysis to sophisticated modern spectroscopic and mass spectrometric techniques. The most widely used are the following.

Wet chemical analysis

Wet chemical analysis is a quantitative analytical method that uses titration reactions on solutions produced by total dissolution of a rock or mineral sample in acid. The dissolution technique depends on rock type, but generally a whole or powdered sample is dissolved in an acid solution with some mixture of nitric, phosphoric, and hydrochloric acids. Used almost exclusively for determining major and minor element concentrations, the technique lacks the resolution required for trace element analysis. Many of the older rock analyses (and good mineral analyses) reported in the petrologic literature were obtained this way.

Atomic absorption spectrophotometry

As in wet chemical analysis, atomic absorption spectrophotometry requires dissolution of the sample in an acid solution. The solution is then vaporized in a flame, and the atomic emission and absorption are measured quantitatively by comparison with standard solutions of known concentration. The technique is generally more precise than wet chemical analysis for major elements at low concentrations but can be less precise at high concentrations.

X-ray emission or fluorescence (XRF) spectroscopy

All materials emit X-rays (that is, fluoresce at characteristic X-ray wavelengths) from their constituent atoms when those atoms are excited by a focused high-energy source, in this case a high-energy X-ray beam, essentially an X-ray tube like that used in X-ray diffraction; typical XRF sources include tungsten and gold. The atoms of a given element emit X-ray spectra at characteristic frequencies and intensities, reflecting the precise electron energy levels in the excited atom. Quantitative analysis involves measuring those intensities for an unknown rock and then comparing them with measurements of standard materials of known concentration. Corrections based on the physics of X-ray fluorescence and absorption are applied. XRF became a standard analytical technique in the 1980s.

Electron microprobe (EMP) and proton-induced X-ray emission (PIXE)

Both techniques are fundamentally similar to XRF, with two exceptions: the source of excitation energy and the spatial resolution of the analysis. The EMP technique uses a focused beam of high-energy electrons to excite the sample and can excite areas on the sample surface as small as about 1 micrometer (1 micron) across. The PIXE technique uses high-energy protons as the energy source, and the excited spot is slightly larger. Like XRF, both techniques compare X-ray intensities from materials of known and unknown chemical concentration to arrive at a quantitative analysis. The fine analytical resolution allows very small areas of a single mineral grain or of glass to be analyzed. In fact, by far the most common use of these techniques is the analysis of single mineral grains within thin sections of rock. The electron or proton beam size means that whole-rock analyses can only be obtained from large polycrystalline areas. Although not ideal, rock analysis with these techniques typically involves powdering and fusing the sample and analyzing the resulting glass (with special care required to produce a homogeneous glass). All the major elements and some minor elements can be determined with both EMP and PIXE.

Inductively coupled plasma (ICP) spectroscopy

A newer technique for whole-rock analysis of the major elements and selected trace elements uses a powdered, dissolved rock sample that is vaporized into a plasma (a very high temperature ionized gas). The plasma is analyzed and compared with standards using emission spectroscopy. Although some sample preparation is required, the technique is accurate and relatively fast. A further advantage is that the plasma can be directed into a mass spectrometer for analysis of isotopic compositions, as described below in the section on mass spectrometric methods.

INSTRUMENTAL NEUTRON ACTIVATION ANALYSIS (INAA)

INAA involves irradiating a powdered rock sample with a high-flux neutron source, typically in a synchrotron or nuclear reactor, for the analysis of trace elements, especially the rare earth elements. Short-lived nuclides of each element are generated during irradiation. Trace element concentrations can be determined by monitoring the alpha, beta, and gamma radiation emitted by the short-lived nuclides residing in the sample.

Mass spectrometric methods

Isotopic analyses, for both stable and radiogenic isotopes, are performed using a mass spectrometer, which can discriminate between atomic and molecular particles of different mass. Material is introduced into the mass spectrometer as a gas, as a plasma, or as a liquid evaporated onto a filament that is then heated to incandescence; in the latter case, the particle stream enters the mass spectrometer from the heated filament. Isotopes, individual atoms of the same element that have different atomic weights, are separated by mass in the instrument, and the relative abundances of an element's isotopes are determined by comparison with calibration standards. This technique is known as thermal ionization mass spectrometry or, with isotope dilution, isotope dilution thermal ionization mass spectrometry. When an ICP unit is the gas source, the technique is referred to as ICP-MS.

In mass spectrometric analysis of individual grains of material, two types of instruments can be used. The first is a variant of ICP-MS in which the material (plasma) sent into the mass spectrometer is obtained by laser ablation of a small spot on the target, effectively excavating a very tiny pit as the high-energy laser vaporizes the target material. This instrument is known as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The second technique uses a focused ion beam (typically cesium or oxygen ions) to physically excavate a pit in the sample. The excavated material then passes into the mass spectrometer for analysis. Both instruments have a variety of uses in trace element and isotopic analysis and are especially valuable for U-Pb geochronology.

CHEMICAL COMPOSITION OF IGNEOUS ROCKS

Over roughly 200 years of chemical analysis of igneous rocks, petrologists have recognized some basic patterns. For example, mafic igneous rocks such as basalt are rich in calcium, iron, and magnesium and poor in sodium, potassium, and silica relative to felsic igneous rocks. Mafic rocks are rich in ferromagnesian minerals (olivine, pyroxene, and amphibole), whereas felsic rocks are rich in light-colored minerals (quartz, feldspar, and feldspathoids). The terms melanocratic, meaning dark-colored, and leucocratic, meaning light-colored, have also been used. These chemical compositions tend to reflect the mineral contents of the various rock types (and vice versa), and both are inevitably the result of magma formation and evolution. Le Maitre (1976) calculated average compositions from a compilation of over 20,000 chemical analyses. It is interesting to note the number of individual analyses in each category: granite and basalt show the greatest numbers, a situation reflecting the fact that these are the two most abundant igneous rock types in the Earth's crust. Among the other felsic rock types, the most voluminous are granodiorite and quartz diorite, reflecting their role in large continental magmatic to batholithic settings.

Igneous rocks rarely contain less than about 45% or more than 75% silica. Why should this be so? Igneous rocks are mostly made of silicate minerals which, except for quartz, have silica concentrations between about 35 and 70%. Because rock volumes are dominated by the feldspars, whose silica contents lie between about 55 and 68%, large departures from this range are not to be expected. Low-silica and silica-free minerals (olivine and oxides, respectively) lower the silica contents of mafic rocks, whereas the presence of abundant quartz in felsic rocks is important in raising theirs. Intermediate igneous rocks, with about 60-65% silica, have silica contents close to those of the feldspars that dominate them.

A rock's composition, then, is nothing more than the weighted volumetric average of the compositions of its individual constituent minerals. Although this may seem a minor point, it is simple and fundamental to the relationship between the chemical composition and the mineral content of igneous rocks (indeed, of all rocks), and it underlies one of the most important calculations in petrology, the CIPW norm.

MEASUREMENT AND INTERPRETATION OF MINERALOGY

The mineralogical composition of an igneous rock is an essential property because it is used both for classification and for interpretation of the origin and evolution of the magma. For most rocks, the mineralogy can be observed and measured using optical techniques or various chemical methods; for others, including many volcanic rock types and especially glassy ones such as obsidian, it cannot. For these rocks, calculation of a "synthetic" mineralogy based on the chemical composition is essential.

WEIGHT AND VOLUME MODES

The directly measured mineral content of a rock is called the mode, which is usually expressed as the volume or weight percent of each constituent mineral. The mode can be measured in various ways, but the most common is the traditional one: using a mechanical microscope stage to point-count mineral grains in a thin section of the rock (Williams, Turner, and Gilbert 1982). The process involves moving the thin section systematically along grid lines and identifying and counting the mineral grain at each grid intersection, roughly 500 to 5000 counts depending on grain size and the required precision, with the totals normalized to 100%. The resulting percentages are, of course, percentages by area, but through extrapolation to the third dimension they are considered equivalent to volume percentages for rocks of homogeneous texture. Note that for rocks with prominent microstructure such as fine-scale layering or strong flow lineation, this extrapolation to the third dimension may not be valid.

Semiquantitative (rough) modes can be determined by visual estimation of mineral proportions either in thin section or on sawed or polished slabs. For some petrologic purposes weight modes are required, for example, for comparison of actual mineral contents with phase diagrams plotted in weight percent.

A weight mode is calculated from a volume mode by multiplying the volume percent of each mineral by its specific gravity, then normalizing the sum of the new values to 100% (because specific gravity equals weight divided by volume, weight percent must equal volume percent times specific gravity, normalized). It is important to note whether a mode is reported as a volume or a weight mode; if this is not specified, it is usually assumed to be a volume mode. Mineral modes are the basis of the IUGS classification discussed below.
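
A minimal sketch of the volume-to-weight conversion, assuming rounded specific gravities and a hypothetical point-count result:

```python
# Convert a volume mode to a weight mode: multiply each mineral's volume
# percent by its specific gravity, then normalize the products to 100%.
SPECIFIC_GRAVITY = {"quartz": 2.65, "K-feldspar": 2.56,
                    "plagioclase": 2.68, "biotite": 3.0}  # rounded values

def volume_to_weight_mode(vol_mode):
    weighted = {m: v * SPECIFIC_GRAVITY[m] for m, v in vol_mode.items()}
    total = sum(weighted.values())
    return {m: 100.0 * w / total for m, w in weighted.items()}

# Hypothetical point-count result for a granite (volume %).
granite = {"quartz": 30.0, "K-feldspar": 35.0, "plagioclase": 28.0, "biotite": 7.0}
print(volume_to_weight_mode(granite))
```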

THE CIPW NORM

The proportions of minerals in igneous rocks are a fundamental property used for comparison and classification. In some cases, however, it is impossible to measure the modal mineralogy using traditional thin-section techniques, for example, for fine-grained or glassy volcanic rocks. For some plutonic rocks, magmatic crystallization variables such as gas content or depth of emplacement have produced minerals that cannot be directly compared chemically, for example, pyroxene versus amphibole or amphibole versus biotite. For these reasons, four petrologists of the early twentieth century - Whitman Cross, Joseph P. Iddings, Louis V. Pirsson, and Harry S. Washington (note their last-name initials, C.I.P.W.) - devised a scheme for using the chemical analysis of an igneous rock to calculate an ideal or hypothetical mineralogy based on a set of standard rules. These rules allocate the chemical constituents to high-temperature, first-to-crystallize minerals before lower-temperature minerals, thereby simulating the actual crystallization sequence of an idealized magma (as in Bowen's Reaction Series). The normative minerals are anhydrous, which allows hydrous magmatic rocks to be compared directly with less hydrous ones on the basis of rock composition.

Because water content is irrelevant to the CIPW norm calculation, there is usually no direct or obvious correspondence between an accurate modal mineralogy and the normative mineralogy of a given rock (most plutonic rocks contain at least a little water), but the algebraic correspondence between natural (e.g., hydrated) and ideal (i.e., anhydrous) minerals is usually clear. For example, leaving water aside, one chemical unit of magnesium biotite is equivalent to one unit of potassium feldspar plus three units of enstatite pyroxene, minus three units of quartz:

KMg3AlSi3O10(OH)2 = KAlSi3O8 + 3MgSiO3 - 3SiO2 + H2O

(biotite = K-feldspar + enstatite - quartz + water)

Similarly, one unit of actinolite is equivalent to two units of diopside plus three units of enstatite and one unit of quartz:

Ca2Mg5Si8O22(OH)2 = 2CaMgSi2O6 + 3MgSiO3 + SiO2 + H2O

(actinolite = diopside + enstatite + quartz + water)

Direct comparison of modal and normative mineralogy is not simple, because the normative mineralogy is calculated on a molar rather than a volumetric or weight basis. Conversion of the normative mineralogy to a form compatible with either volume or weight modes can be done using the molar volumes or molecular weights of the appropriate minerals.
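
A minimal sketch of that conversion, assuming rounded molecular weights and hypothetical normative molar proportions:

```python
# Convert a normative mineralogy expressed in moles to weight percent by
# multiplying each normative mineral's molar amount by its molecular weight
# (g/mol, rounded) and normalizing to 100%.
MINERAL_MW = {"orthoclase": 278.3, "albite": 262.2, "anorthite": 278.2,
              "quartz": 60.1, "enstatite": 100.4}

def molar_norm_to_weight(molar_norm):
    wt = {m: n * MINERAL_MW[m] for m, n in molar_norm.items()}
    total = sum(wt.values())
    return {m: 100.0 * w / total for m, w in wt.items()}

# Hypothetical normative molar proportions, for illustration only.
print(molar_norm_to_weight({"orthoclase": 0.12, "albite": 0.20,
                            "anorthite": 0.08, "quartz": 0.45,
                            "enstatite": 0.15}))
```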

MINERALOGICAL CLASSIFICATION

For at least 200 years, petrologists have tried to identify, analyze, characterize, or classify igneous rocks. Igneous rocks have been given names based on mineral content, composition, geographic locality, or texture, or on no clear basis at all. Because of this often illogical proliferation of names, a large and confusing historical nomenclature exists for igneous rocks. Most of these earlier approaches were summarized by A. Johannsen (1931, 1937, 1938) in his four-volume set of books entitled A Descriptive Petrography of the Igneous Rocks. Johannsen ably summarized the problem:

Many and peculiar are the classifications that have been proposed for igneous rocks. Their variability depends in part upon the purpose for which each was intended, and in part upon the difficulties arising from the characters of the rocks themselves. The trouble is not with the classifications but with Nature which did not make things right.

The obvious approach to an objective and unambiguous classification of igneous rocks is one built around mineralogy and texture. These two characteristics provide much information about the origin and history of a rock and are readily described in the field as well as in the laboratory. Mineralogical data can be elaborated, where useful, with chemical or isotopic analyses. For microcrystalline volcanic rocks such as rhyolites and basalts, the best classification usually requires chemical analysis.

The accuracy of mineralogic identification of field and hand specimens is highly variable. It depends on crystal size, degree of alteration, and the quality of observation by the geologist. This last factor is often as important as the others, because differences in ability among geologists can be as great as the difference in crystal size between granite and rhyolite. In any case, it is clear that a description such as "myrmekitic albite-riebeckite granite" cannot be based on hand-specimen examination alone; thin sections are required for the most accurate classification. In the field, descriptions such as "basalt" for a dark-colored microcrystalline rock, or "lithic rhyolite tuff" for a dense, rather light-colored, layered rock containing fragments, must often be used. Students' ability to characterize rocks will improve in direct proportion to their knowledge of mineral characteristics and mineral associations and to their experience.

Most igneous rocks contain only a few minerals in high abundance and more varieties in minor amounts. Because of their ease of identification and their significance in petrogenesis, the more common and abundant minerals are usually chosen as the basis of classification. Most crustal igneous rocks contain significant concentrations of feldspar, along with a silica mineral such as quartz or with feldspathoid minerals indicative of silica deficiency. Igneous rocks rich in these light-colored minerals are usually referred to as felsic or leucocratic. In contrast, many igneous rocks contain abundant, mostly dark-colored ferromagnesian minerals such as pyroxenes, amphiboles, olivine, and biotite in addition to light-colored minerals; such rocks are generally classed as intermediate or mesocratic. When dark minerals dominate, the rock is called mafic or melanocratic. If dark minerals are virtually the only constituents, the rock is ultramafic. The igneous rock type anorthosite consists mostly of plagioclase and is thus light-colored, or leucocratic, whereas dunite consists mostly of olivine and is thus light to medium green. Anorthosite is not regarded as a felsic rock, however, but is most often associated with the mafic rock gabbro.

Most classifications of igneous rocks are based on the relative amounts of light and dark minerals and on grain size, which reflects cooling rate and mode of emplacement. Some classification schemes have depended on the so-called color index, or CI (see below), which is essentially a darkness scale from 0% (white) to 100% (black) based on the approximate percentage of dark minerals in the rock. The terms leucocratic, mesocratic, and melanocratic can actually be quantified with the CI scale.


The IUGS Classification System

To meet the need for a single rational classification system for igneous rocks for worldwide use, Albert Streckeisen published an initial, generally applicable classification scheme for plutonic rocks in 1967. The International Union of Geological Sciences then formed a commission of geologists from around the world, headed by Streckeisen, to elaborate and formalize the proposal for plutonic rocks, and later added a classification system for volcanic rocks. This now internationally accepted, comprehensive system for the classification of igneous rocks allows nomenclature to be applied to a desired level of precision and, above all, consistently. Most important, given accurate mineralogic determinations by individual investigators, it allows a reader of the geologic literature to be certain that a hornblende-biotite granodiorite in Russia is the same rock type as a hornblende-biotite granodiorite described from Texas. The IUGS recommendations have been published in book form along with a glossary of terms, and the reader is referred to this book for further details of classification and nomenclature.

To classify a rock correctly on the basis of mineral composition, one must determine the percentages of five minerals or mineral groups: quartz, plagioclase, alkali feldspar, ferromagnesian minerals, and feldspathoids. In hand-specimen examination, quartz is identified by its translucency, vitreous luster, and lack of obvious cleavage; plagioclase by its cleavage and by polysynthetic twinning striations on its cleavage surfaces; potassic feldspar by its cleavage, lack of twinning striations, and common pink to tan color; ferromagnesian minerals by their brown, green, or black colors; and feldspathoids by the individual characteristics possessed by each. The only feldspathoid that is difficult to recognize is nepheline, which can easily be mistaken for quartz in hand-specimen work. In thin section, discrimination of quartz and nepheline is quite easy because they have quite different optical properties.

As noted earlier, the mineralogy of a rock reflects its chemical composition. A classification of igneous rocks based on mineralogy is therefore a direct reflection of magma composition. The IUGS classification distinguishes the common rocks on the basis of grain size: phaneritic rocks are classified as plutonic, and aphanitic rocks as volcanic. Within each major category, rocks are named on the basis of mineral percentages. The classification categories are necessarily based on common usage and on the abundance of natural mineralogic groupings, into which most rocks fall.

Felsic and Mafic Rocks

Petrologists typically use triangular diagrams. A simple geometric procedure locates or plots individual points in any triangular diagram. In an equilateral triangle, a simple location scheme involves dividing the triangle into smaller equilateral triangles that form a triangular grid. For all triangles, including non-equilateral ones for which creating a regular grid is impractical, an elegant mathematical approach is to construct an altitude from each corner through the point to the opposite side of the triangle. The distance from each side to the point, divided by the total length of that altitude, gives the fraction of the constituent at the corresponding corner. The classification technique involves determining the volumetric percentages of the A, P, and Q or F constituents, along with the amount and type of mafic constituents. If a plutonic rock contains only a small, unspecified fraction of mafic minerals, this can be indicated with the prefix leuco- (as in leucogranite); with a large proportion of mafic minerals, the name should be prefixed by mela- (as in melagranite). On rare occasions, a granite might be field-classified as melagranite even if it does not have an unusually high content of mafic minerals, as long as it is dark-colored; this can result from the presence of the dark green-brown potassic feldspar that occurs in some unusual ferroan granites and syenites. Full use of the QAPF classification diagram requires fairly precise knowledge of the mineral mode, information usually obtained through laboratory petrographic examination with a microscope. When precise mineral percentages cannot be determined, for example during field mapping or routine hand-specimen examination, common general group names are used.
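
As an illustration of the altitude construction, a minimal sketch that converts a (hypothetical) Q-A-P mode into Cartesian plotting coordinates for an equilateral triangle of unit edge:

```python
import math

# Locate a point in an equilateral ternary diagram (e.g., QAP). The three
# components are normalized to fractions; the Cartesian coordinates follow
# from the altitude construction described above (unit edge length).
def ternary_xy(q, a, p):
    total = q + a + p
    q, a, p = q / total, a / total, p / total   # normalize to fractions
    x = 0.5 * (2 * p + q)        # A at (0, 0), P at (1, 0), Q at the apex
    y = q * math.sqrt(3) / 2     # height above the A-P base scales with Q
    return x, y

# Hypothetical granite mode: Q = 30, A = 45, P = 25 (volume %).
print(ternary_xy(30, 45, 25))
```
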
Each plagioclase-rich region of the QAPF diagram contains two or more rock types whose names depend on the anorthite content of the plagioclase or on the abundance or identity of the mafic minerals. Anorthosite is a plutonic rock that contains more than 90% plagioclase and less than 10% mafic minerals. The distinction between gabbro and diorite can be based on either criterion: the plagioclase in gabbro generally has a composition more calcic than An50, whereas the plagioclase in diorite is less calcic than An50. If a rock must be classified from a hand specimen alone, the distinction between gabbro and diorite obviously cannot be based on plagioclase composition, and the proportion of mafic minerals can be used instead. Gabbro typically contains more than 35% by volume of the mafic minerals olivine, augite, or orthopyroxene. Diorite typically contains less than 35% mafic minerals by volume and generally contains hornblende as well as, or instead of, pyroxene. To confuse matters a little, there are in fact transitional rocks called gabbro-diorite or basaltic andesite. A further subdivision of gabbroic rocks is based on which mafic minerals are present in addition to calcic plagioclase: gabbro contains clinopyroxene, norite contains orthopyroxene, and gabbronorite has roughly equal proportions of orthopyroxene and clinopyroxene. The classification of plagioclase-rich rocks is summarized in the IUGS scheme. An important principle to keep in mind always is that although adherence to the recommended rules is encouraged, unambiguous communication sometimes requires common sense when the rules are ambiguous.
Ultramafic Rocks. Ultramafic rocks are almost always phaneritic and have Q + A + P + F contents of less than 10%; that is, mafic minerals make up more than 90% of the rock. The principal mafic minerals in ultramafic rocks are magnesian olivine, augite, orthopyroxene, and hornblende. A wide variety of minor minerals can occur, but the most common are aluminous or chromian spinel, magnetite, ilmenite, garnet, phlogopite, and calcic plagioclase. The generally rare hornblende-bearing ultramafic rocks and the more common hornblende-free ones show little overlap, and most hornblende-bearing varieties are simply called hornblendites, with the names of minor minerals used as qualifiers, for example, garnet hornblendite. By far the most common ultramafic rocks are those dominated by olivine and pyroxenes, and their classification scheme uses a triangular diagram with olivine, calcic clinopyroxene, and orthopyroxene at the corners. The more commonly encountered varieties and names are:

Peridotite: a general or field term for a rock that contains 40 to 100% olivine, with most of the remainder pyroxene.

Lherzolite: a very important rock type, which has been postulated to make up much of the Earth's mantle. Lherzolite, named for its occurrence in the ultramafic bodies at Lherz in the French Pyrenees, is an olivine-rich rock with substantial orthopyroxene and minor chromium-bearing calcic pyroxene. It commonly contains a minor aluminous mineral, either chromian Al-spinel or garnet.

Harzburgite: a special term for an olivine-rich, olivine-orthopyroxene rock. It most commonly contains chromian Al-spinel as a minor or accessory mineral, although garnet can also occur.

Dunite: a peridotite containing 90 to 100% olivine, with the remainder consisting mostly of pyroxene and chromian Al-spinel.

Websterite: named for its type locality at Webster, North Carolina, this rock is a pyroxenite consisting largely of subequal proportions of orthopyroxene and clinopyroxene, with the small remainder either olivine or hornblende.

Kimberlite: a rare porphyritic ultramafic rock with excess potassium, and thus containing phlogopite or potassic amphibole phenocrysts, effectively a mica peridotite. It contains olivine, phlogopite, pyroxenes, and chromite. Characteristic accessory minerals are monticellite, magnesium-rich garnet, and titanium-rich minerals. Some kimberlites contain diamonds.

Lamproites: similar to kimberlites and lamprophyres, lamproites are ultramafic and have molar total alkalis exceeding alumina, making them peralkaline. Lamproites can occur as flows. They commonly contain rare minerals, including diamonds.
 

Volcanic rocks can in some cases be named using diagrams similar to those used for plutonic rocks. However, the fine-grained nature of volcanic rocks makes classification based on modal mineralogy difficult in general and impossible for glassy rocks. Volcanic rocks are therefore more accurately classified using chemical criteria. The distinction between basalt and andesite is made primarily on the basis of color index (CI) and silica content or, less accurately, on the basis of plagioclase composition. Plagioclase composition in many volcanic rocks is difficult to use as a criterion because of the very common presence of strong compositional zoning, even oscillatory zoning, in the crystals, which makes determination of the bulk anorthite content difficult. Basalts, which plot near the P corner of the triangle, are subdivided into tholeiite, olivine tholeiite, high-Al basalt, and alkali basalt. The distinguishing mineralogic and chemical characteristics of these basalt types are very important. Note that many basalts cannot be placed in a particular category by microscopic analysis but require chemical analysis or normative mineral calculations. Common feldspathoid-bearing volcanic rocks can be rich in alkali feldspar (phonolite), in plagioclase (tephrite), or in nepheline or leucite (nephelinite or leucitite).

OTHER ASPECTS OF CLASSIFICATION

Strictly speaking, the IUGS classification does not incorporate texture into its scheme beyond the obvious distinction between phaneritic and aphanitic rocks. Nevertheless, some igneous rock names are based on textural criteria, with mineral content secondary, including:

Pegmatite: a very coarse-grained rock (grain size greater than 1 cm and in places approaching or exceeding 1 m) with interlocking grains. Its composition is typically granitic, and pegmatites usually contain abundant alkali feldspar (albite or sodic plagioclase plus microcline) and quartz crystals.


Chemical classifications

A large variety of chemical classifications of igneous rocks have been proposed, some based on complete chemical analyses of the rock types and others on only part of the rock chemistry. Many of these classifications are used by specialists in genetic schemes for particular rock suites and lie beyond the scope of this book. The chemical classifications most often used by students of petrology are presented here.

Silica saturation

Because igneous rocks originate by crystallization of cooling magma, it is logical to regard a magma as a silicate liquid containing dissolved mineral components. Its detailed chemistry is not that of an ordinary solution, but the amounts of the dissolved components nonetheless depend on temperature and pressure. As a magma cools, it becomes saturated with particular mineral components; those minerals then crystallize, precipitate, and become part of the resulting rock. The concept is generally applied to silica and the silica minerals (usually quartz) across the range of igneous rock varieties from basaltic to granitic. A rock is described as silica-saturated if it contains quartz or another silica mineral, and as silica-undersaturated if it contains minerals such as feldspathoids and magnesian olivine that never occur together with quartz. A magma undersaturated in silica crystallizes silica-poor minerals early, but it may move toward silica saturation as crystallization slowly proceeds. Under such conditions, early-formed olivine grains combine with silica to become orthopyroxene:

Mg2SiO4 + SiO2 = 2MgSiO3

Feldspathoids can likewise combine with silica to become feldspar, for example:

KAlSi2O6 + SiO2 = KAlSi3O8

Many igneous rocks that lack quartz contain silica-undersaturated minerals such as nepheline or magnesian olivine, although weakly undersaturated rocks are only rarely recognized as such. Minerals such as hornblende can mask modest degrees of silica undersaturation, and small amounts of quartz or feldspathoid can be difficult to identify either in hand specimen or with the microscope.

Alumina saturation

According to the IUGS classification, a granite is a rock containing between 20 and 60% quartz, with alkali feldspar making up more than 35% of the total feldspar. A petrologist studying granitic rocks may also need a chemical criterion; the key chemical variable is the Al2O3 content, which is reflected in the minor and accessory mineral content. For example, granites high in Al2O3 contain aluminous minerals such as garnet or muscovite, whereas granites low in Al2O3 contain sodic minerals such as riebeckite or aegirine-augite. The alumina content of a granite directly monitors the character and type of crustal material that melted to form the granitic magma, so Al2O3 is the basis for a classification of granites. Although developed for granites, the scheme can be applied to their volcanic equivalents (rhyolites). The classification weighs the amounts of alumina, alkalis, and calcium in the CIPW normative minerals, with reference to the ratio of alumina to alkalis plus calcium in the feldspars.

A granitic magma will not crystallize muscovite unless at least a small molar amount of Al2O3 is present in excess of the sum Na2O + K2O + CaO (note that chemical criteria of this kind are computed as molecular proportions, not weight percentages). Such granites have more alumina than is needed to make feldspars and are referred to as peraluminous. If molar Al2O3 < Na2O + K2O, the excess of alkalis over alumina will likely result in sodium- and ferric-iron-rich minerals such as aegirine-augite or sodic amphibole; such granites are called peralkaline. If alumina exceeds the alkalis but not the alkalis plus lime, that is, Na2O + K2O < Al2O3 < Na2O + K2O + CaO, then neither muscovite nor sodic ferromagnesian minerals result, and the granite is termed metaluminous (meaning intermediate).
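
A minimal sketch of this test, using molar proportions computed from a hypothetical weight-percent analysis; the thresholds follow the definitions above:

```python
# Classify a granite's alumina saturation from molar proportions.
# Weight percents are first converted to molar amounts (rounded MW, g/mol).
MW = {"Al2O3": 101.96, "Na2O": 61.98, "K2O": 94.20, "CaO": 56.08}

def alumina_saturation(wt_pct):
    m = {ox: wt_pct[ox] / MW[ox] for ox in MW}          # molar proportions
    if m["Al2O3"] > m["Na2O"] + m["K2O"] + m["CaO"]:
        return "peraluminous"        # excess alumina: muscovite, garnet
    if m["Al2O3"] < m["Na2O"] + m["K2O"]:
        return "peralkaline"         # excess alkalis: aegirine, riebeckite
    return "metaluminous"            # intermediate: hornblende, biotite

# Hypothetical granite analysis (wt%), for illustration only.
print(alumina_saturation({"Al2O3": 14.2, "Na2O": 3.5, "K2O": 4.4, "CaO": 1.6}))
```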

Chemical classification of volcanic rocks

In the IUGS scheme, volcanic rocks are classified on the basis of silica content (wt% SiO2) and total alkalis (wt% Na2O + K2O). Note that certain distinctions (e.g., between basanite and tephrite) also require CIPW normative calculations.

Chemical trends

Suites of igneous rocks from a restricted geographic area commonly show important chemical interrelationships. To elucidate possible genetic relationships among different magmas and to compare different igneous rock suites, igneous petrologists have devised a number of chemical, mineralogical, and graphical schemes for describing igneous rocks. Some measures are based on mineralogy, typically estimated mineralogy (such as the CIPW norm), and others on ratios of chemical components. The most common type of plot is called a variation diagram, in which large amounts of chemical data reveal the shifts in magma chemistry that are likely to occur as crystallizing magmas evolve.

Harker diagrams

One of the most widely used of all variation diagrams is the Harker diagram, which shows the weight percentages of various oxides as functions of the weight percent of one oxide. Weight percent SiO2 is generally the abscissa, because this indicator is so useful in tracking the evolution of magmas in genetically related suites descended from a single parent: the more primitive members generally contain the least silica, and silica increases as the suite evolves. For magmas fractionating olivine, MgO decreases as the magma fractionates while the silica content climbs. Because the minerals that crystallize earliest from a magma during fractionation (as in Bowen's reaction series) are Mg-rich, relatively silica-poor types, their rapid removal (fractionation) leaves residual liquids enriched in silica and depleted in MgO; the oxides therefore generally correlate in a roughly linear fashion as a result of fractionation processes.

In Harker-type variation diagrams with silica on the abscissa, the plotted parameters may vary little across parts of an igneous series: much of a basaltic series shows little change in SiO2 during the simultaneous removal of pyroxene, plagioclase, and olivine, because these minerals collectively contain about as much SiO2 as the magma itself. For this reason, MgO content is commonly used as the abscissa for basalts and andesites, because of the strong control exerted by the combined fractionation of olivine and pyroxene. Correlations or trends on Harker diagrams record processes of igneous rock formation involving magmas, for example, assimilation and mixing; coherent linear correlation trends indicate the chemical evolution of genetically related magmas and can also be used to estimate such natural processes quantitatively.
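
A minimal sketch of a Harker diagram using matplotlib; all analyses are hypothetical, chosen only to show the expected falling MgO and rising K2O trends:

```python
import matplotlib.pyplot as plt

# Sketch of a Harker diagram: wt% SiO2 on the abscissa, other oxides on the
# ordinate. The analyses below are hypothetical, for illustration only.
sio2 = [48.5, 52.0, 57.3, 63.1, 69.4, 74.8]
mgo  = [7.9, 6.1, 4.2, 2.6, 1.2, 0.4]
k2o  = [0.7, 1.1, 1.8, 2.6, 3.4, 4.3]

fig, ax = plt.subplots()
ax.scatter(sio2, mgo, label="MgO")   # decreases with fractionation
ax.scatter(sio2, k2o, label="K2O")   # increases in residual liquids
ax.set_xlabel("SiO2 (wt%)")
ax.set_ylabel("oxide (wt%)")
ax.legend()
plt.show()
```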

The AFM (or FMA) diagram

The AFM diagram is another type of variation diagram used to track fractionation. The oxide sums Na2O + K2O (A), FeO + Fe2O3 (F), and MgO (M) define the corners of the triangular diagram. Genetic trends on the AFM diagram reflect the removal of olivine or clinopyroxene, which controls magma evolution. The diagram allows simultaneous observation of two fractionation parameters: the Fe/Mg ratio and the total alkali content. The AFM diagram is particularly used to assess magma evolution trends in convergent plate environments.
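
A minimal sketch of the corner calculation, with a hypothetical basalt analysis:

```python
# Compute AFM ternary fractions from a whole-rock analysis (wt%):
# A = Na2O + K2O, F = FeO + Fe2O3, M = MgO, normalized to 100%.
def afm(wt):
    a = wt["Na2O"] + wt["K2O"]
    f = wt["FeO"] + wt["Fe2O3"]
    m = wt["MgO"]
    total = a + f + m
    return {"A": 100 * a / total, "F": 100 * f / total, "M": 100 * m / total}

# Hypothetical tholeiitic basalt, for illustration only.
print(afm({"Na2O": 2.3, "K2O": 0.5, "FeO": 9.1, "Fe2O3": 2.4, "MgO": 7.2}))
```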

Differentiation index

The differentiation index (DI) measures the total evolution of a magma as the sum of the normative minerals at the top of (that is, lowest temperature in) Bowen's reaction series: orthoclase, albite, and quartz. Because it is based on normative minerals expressed as percentages, the index ranges from 0 to 100. In general, basalts have low differentiation indices (<25), whereas strongly fractionated granites commonly exceed 75.

Alkali-lime index

In a genetically related series of igneous rocks, the total alkali oxides generally increase, and CaO decreases, with increasing SiO2 content. On a Harker variation diagram on which both Na2O + K2O and CaO are plotted against SiO2, the two linear trends intersect at an important point where wt% (Na2O + K2O) = wt% CaO. The value of wt% SiO2 at this point is called the alkali-lime index. In general, rock series derived from alkali-rich magmas (alkalic series) have low values of the alkali-lime index (<50), and alkali-poor (calcic) series have high values.
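
A minimal sketch of estimating the index by fitting the two trends and solving for their intersection; the analyses are hypothetical:

```python
import numpy as np

# Estimate the alkali-lime index: fit linear trends of (Na2O + K2O) and CaO
# against SiO2, then solve for the SiO2 value where the two lines cross.
# All analyses below are hypothetical, for illustration only.
sio2    = np.array([48.0, 53.0, 58.0, 63.0, 68.0, 73.0])
alkalis = np.array([2.6, 3.2, 4.0, 4.9, 5.6, 6.4])
cao     = np.array([10.2, 8.6, 6.9, 5.1, 3.4, 1.8])

m1, b1 = np.polyfit(sio2, alkalis, 1)   # rising alkali trend
m2, b2 = np.polyfit(sio2, cao, 1)       # falling lime trend
index = (b2 - b1) / (m1 - m2)           # SiO2 where the lines intersect
print(f"alkali-lime index ~ {index:.1f} wt% SiO2")
```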

Larsen index

The Larsen index was developed for use in Harker-type diagrams for suites, such as basaltic series in which pyroxene and plagioclase are removed together with olivine, in which SiO2 content alone shows little variation. Defined as 1/3 SiO2 + K2O - (FeO + MgO + CaO), the Larsen index replaces SiO2 content alone as the abscissa of a Harker diagram and produces a greater spread in the trends during the early fractionation of basaltic series.

Assimilation and fractional crystallization (AFC)

Petrologists now widely recognize that much igneous chemical evolution occurs in magma chambers through combined processes in which crystallization of the magma is accompanied by melting and selective extraction of material from the chamber walls or from xenoliths (see Chapter 6 for a more detailed discussion). DePaolo (1981) developed a set of equations for assessing the effects of combined assimilation and fractional crystallization on both the isotopic and the trace element composition of magmas.
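
A minimal sketch of the trace-element form of the AFC equation as it is commonly cited from DePaolo (1981); the parameter values below are hypothetical:

```python
# DePaolo (1981) AFC equation for a trace element (commonly cited form):
#   z  = (r + D - 1) / (r - 1)
#   Cm = C0 * F**(-z) + (r / (r - 1)) * (Ca / z) * (1 - F**(-z))
# where r is the ratio of assimilation rate to fractionation rate, D the
# bulk partition coefficient, F the fraction of magma remaining, Ca the
# assimilant concentration, and C0 the initial magma concentration.
# Note that with r = 0 this reduces to Rayleigh fractionation, C0 * F**(D-1).
def afc_concentration(c0, ca, f, r, d):
    z = (r + d - 1.0) / (r - 1.0)
    return c0 * f**(-z) + (r / (r - 1.0)) * (ca / z) * (1.0 - f**(-z))

# Hypothetical values: Sr-like element, 40% of the magma remaining.
print(afc_concentration(c0=300.0, ca=600.0, f=0.4, r=0.3, d=1.5))
```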

Summary

Igneous rocks consist of minerals (except for rare varieties such as the glass obsidian), and the identities and relative proportions of these minerals depend on the composition of the parent magma. Petrologists characterize all aspects of the composition of igneous rocks: the concentrations of major, minor, and trace elements, the compositions of individual minerals, and ultimately the ratios of stable and radiogenic isotopes. All of this chemical information is essential for testing hypotheses about the origin and solidification of magmas. Many observations of igneous rocks require no sophisticated laboratory techniques yet still yield information important for characterizing magmatic rocks. For example, estimating or measuring the proportions of minerals in an igneous rock is necessary if a petrologist is to classify the rock and use phase diagrams to explain its crystallization. Examination of the tens of thousands of chemical analyses of igneous rocks obtained over the last half-century reveals that igneous rocks are chemically less diverse than their outward variety (color, grain size, mineralogy, and so on) would suggest. For example, almost all igneous rocks have silica concentrations between 45 and 75 wt%. The dominant role of feldspar in most igneous rocks largely controls this relative chemical uniformity.

Classification is an important part of igneous petrology because standardization in the use of names assures geologists that two rocks each called granite are indeed the same rock type. A commission working under the authorization of the International Union of Geological Sciences has developed a standard classification for most igneous rocks. Mineral content is the basis of the classification and naming schemes, both for the plutonic and volcanic rocks dominated by quartz, feldspars, and feldspathoids and for the ultramafic rocks. Mineral content is generally measured in thin section with a petrographic microscope. Compositional information can be used to create a "synthetic mineralogy" for any igneous rock through calculation of the CIPW norm, which partitions the rock's chemical constituents into hypothetical minerals. The technique is especially valuable when a rock (for example, a volcanic rock) is so fine grained or glassy that optical measurement is impossible. Although they play only a minor role in the IUGS classification, special textural or compositional characteristics can also be used in naming rocks. A variety of special diagrams and indexes have been developed by petrologists to exploit chemical and mineralogical data on igneous rocks in order to track genetic processes, especially fractionation.

          We are Google. Resistance is futile. You will be data mined.   
I just read an article about Google on the Financial Times website. What piqued my interest were these quotes from Google's CEO Eric Schmidt: We are very early in the total information we have within Google. The algorithms will get better and we will get better at personalisation. The goal is to enable Google users to be able [...]
          University of Leuphana   
  University of Leuphana, located in Lüneburg, Germany, will visit AUT next Wednesday. The meeting will be the start of a new cooperation, specifically in the field of Data Mining.  
          Search Doesn't Work: Story 2   

Search Doesn't Work: Story 2: NLP means many things. To me it means Natural Language Processing. To others it means neurolinguistic programming. When I search for the bare term 'nlp' in Google, I just get results with the second sense - same for other search engines. If I search for 'William Cohen', the first result on Google is for my friend Prof. William Cohen and the second for the other chap. [...] So why don't I get this for NLP? Why no mixture of results? [...] Word sense disambiguation is a core requirement for a search engine. The problem - the same text having more than one meaning - can certainly be reduced by the user. However, it seems that there is a great amount of scope that could be explored on the interface side. Google is definitely aware of the problem, which is why results for ambiguous names produce multiple sense results pages, but they (and the other major engines) are way behind systems like Vivisimo's Clusty which produces appropriate results for the NLP problem. (Via Data Mining.)

The problem is that word-sense disambiguation is hard. The Clusty results for "nlp" are a tangle. They get one "natural language" cluster in the middle of a bunch of "neuro-linguistic" clusters, and it's not easy to tease them apart. Overall, Clusty's interface is way too busy, and likely to confuse for all but the most easily disambiguated queries. For example, with my favorite query "transducer", none of the clusters on the first screen are for transducer in the sense of automata theory, even though the second search result is a Wikipedia page for that sense of transducer, while the first search result is the Wikipedia page for the electrical engineering sense of transducer.

One might expect a sense-aware search engine to exploit Wikipedia to recognize alternative senses. Clusty doesn't seem to. I don't know how Clusty works in detail, but the problem is that recognizing alternative senses seems obvious in retrospect but it is hugely difficult to do from scratch, because we don't know what information sources and similarity measures will work in general, rather than in hindsight for a particular case.
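
As a toy illustration of the similarity-measure problem, here is a minimal Lesk-style sense assignment; the two glosses and the word-overlap score are deliberately naive, hypothetical stand-ins for what a production engine would need:

```python
# Naive Lesk-style word sense disambiguation: score each candidate sense of
# an ambiguous query by word overlap between its gloss and a result snippet.
SENSES = {
    "nlp/natural-language": "natural language processing text parsing corpus",
    "nlp/neuro-linguistic": "neuro linguistic programming therapy coaching",
}

def best_sense(snippet):
    words = set(snippet.lower().split())
    # Pick the sense whose gloss shares the most words with the snippet.
    return max(SENSES, key=lambda s: len(words & set(SENSES[s].split())))

print(best_sense("a corpus based approach to text parsing"))
print(best_sense("life coaching with neuro linguistic techniques"))
```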

Proper name disambiguation is much easier than general disambiguation. It is defensible for a search engine to focus on a limited set of classes that can be disambiguated reliably instead of trying to do the whole job, badly.


          (SGP) Project Managers – Clearance (RCS K) (Singapore / Malaysia)   
42215

**Work Location: Singapore / Malaysia (Contract will be based on candidate's location) - 2 Openings**

**Overall Role Purpose**

+ To manage and deliver IT projects from build to deployment, within agreed time, cost and quality
+ To lead virtual teams and IT vendors during project lifecycle
+ To transition hosting, support and service management of new capabilities post-development
+ To maintain existing solutions depending on new or changed business requirements

**Accountabilities**

Customers, Stakeholders and People Management

+ Build effective relations with business functions
+ Lead and manage virtual IT teams and external IT vendors for the duration of projects

Project and Process

+ Ensure that solutions are delivered to agreed budget, scope and timelines, in all project phases
+ Lead during systems requirements, analysis, design, development, and implementation of solutions
+ Adhere to Express standards including Architecture, security, development and process

**Requirements**

+ Degree in Computer Science, Information Technology, Business Administration or related fields
+ Total of 5 to 7 years work experience in development and project management, with strong systems analysis, design, programming, testing or implementation skills
+ Exposure to:
  + Service-oriented, event-driven architecture and technologies (e.g. Software AG WebMethods products ESB, BPM, Rules Engine, Complex Events Processing; WebSphere, MQ; Java)
  + Cognitive technologies for data mining, machine learning, advanced analytics, cognitive analysis, natural language computing and predictive analysis
  + Operational or express logistics knowledge with preferable exposure in Clearance processes, systems and data
+ Passion for technology
+ Good communicator, team player, positive and can-do attitude
+ Able to work independently (e.g. individual contributor) as well as in a team (e.g. group contributor)
          (USA-WI-Oak Creek) Supply Chain Generalist   
PPG: We protect and beautify the world. At PPG, we work every day to develop and deliver the paints, coatings and materials that our customers have trusted for more than 130 years. Through dedication and creativity, we solve our customers’ biggest challenges, collaborating closely to find the right path forward. With headquarters in Pittsburgh, we operate and innovate in more than 70 countries. We serve customers in construction, consumer products, industrial and transportation markets and aftermarkets. To learn more, visit www.ppg.com and follow @ PPG on Twitter. Why join us: With PPG, you will find meaning in your work every day, and engage in opportunities that will shape you, personally and professionally. * Your personal strengths will empower you to succeed and make an impact from day one. * You will be inspired to learn and grow, and to get the support you need to identify and achieve your boldest career aspirations. * Your passion to excel will be fueled by your connection to world-class partners, industry experts, the best and brightest colleagues, and future forward technologies. * Your contributions will not only meet the challenges of our global customers, but help theme propel their industries forward. * You will be welcomed into a culture where everyone’s ideas and contributions are valued and encouraged. Just like you, we are driven to make a difference in our world. * Key Responsibilities * * * The Supply Chain Generalist will be responsible for actively supporting company goals in the analysis of business operations that minimize company cost, maximize process efficiencies, and improve the end to end supply chain. This position will engage in cross departmental communication in order to provide analytical support for continuous supply chain improvement. The generalist will be a key support role for analyzing business results and tracking key targets for the Supply Chain Department. Primary responsibilities will include supply chain scorecard analysis, data mining, data analytics, optimization project management and supply chain continuous improvement. The Generalist is also expected to be efficient and drive continuous improvement in the primary systems and tools that support supply chain infrastructure. This includes having a broad view of the supply chain functions interactions to support the implementation of the Oracle ERP system. Gaining a general understanding of Oracle ERP and the tasks related to the various supply chain functions is important. The Supply Chain Generalist gathers data and conducts analysis with the goal of improving the organization's supply chain operations. Identifies underperforming areas in the supply chain and may suggest improvements or resolutions to problems. Familiar with a variety of the field's concepts, practices, and procedures. Relies on experience and judgment to plan and accomplish goals. Performs a variety of complicated tasks. A wide degree of creativity and latitude is expected. / / He/She interacts with Procurement, Logistics, Inventory Control, Planning, Scheduling, Production Personnel, Customer Service, Accounting, Technical and Sales Personnel. The incumbent must communicate with all levels of Management, and have accomplished interpersonal and negotiating skills to allow for effective interaction with all of these groups. The incumbent should be team oriented, and be committed to process improvement and the implementation of the Oracle ERP system by providing excellent service to internal and external customers. 
The incumbent must be able to carry out his/her job responsibilities without close supervision and conduct oneself as a professional. Minimum Required Qualifications: * Bachelor’s Degree in a Business or Technical field, supply chain preferred . * Working knowledge of ERP and PC applications (Excel, Access, and Word). * Ability to effectively communicate across functions at the Oak Creek Plant. * Acting with a sense of urgency and a proactive approach that is results focused. * Ability to operate and execute effectively in a dynamic, fast-paced environment with multiple priorities and challenging deadlines. * Advanced planning, analytical capability, and complex problem solving; able to identify key issues and effectively coordinate efforts to achieve resolution. * Demonstrated teamwork and team building skills in producing results and meeting organizational objectives. * The Supply Chain Generalist must be analytical and detail oriented and have the ability for follow up. Required Skills: * Works well under pressure. * Exhibit and act with integrity by maintaining compliance with internal policies, procedures and all regulatory and governmental regulations and laws. Preferred Qualifications: * Working knowledge of Oracle ERP software. APICS certification: CPIM, CSCP. * 3 years of experience in a chemical production environment. #DF1 PPG prides itself on the quality of its employees and as such, candidates who receive a job offer will be required to successfully pass a hair drug/toxins test and a background check. PPG Industries, Inc. offers an opportunity to grow and develop your career in an environment that provides a fulfilling workplace for employees, creates an environment for continuous learning, and embraces the ideas and diversity of others. All qualified applicants will receive consideration for employment without regard to sex, pregnancy, race, color, creed, religion, national origin, age, disability status, protected veteran status, marital status, sexual orientation, gender identity or expression, or any other legally protected status. PPG is an Equal Opportunity Employer. You may request a copy of PPG’s affirmative action plan by emailing ppgaap@ppg.com . To read more about Equal Employment Opportunity please see attached links: http://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf https://www.dol.gov/ofccp/pdf/EO13665_PrescribedNondiscriminationPostingLanguage_JRFQA508c.pdf https://www.dol.gov/ofccp/regs/compliance/posters/pdf/OFCCP_EEO_Supplement_Final_JRF_QA_508c.pdf **Organization:** **Industrial Coatings* **Title:** *Supply Chain Generalist* **Location:** *Wisconsin-Oak Creek* **Requisition ID:** *1700004711*
          BPDM Workshop   
The Broadening Participation in Data Mining Program (BPDM) will hold its annual workshop August 12-13, 2017, co-located with the ACM SIGKDD 2017 Conference on Knowledge Discovery and Data Mining (KDD) in Halifax, Nova Scotia, Canada. Undergraduates, graduate students, and postdocs from groups underrepresented in data mining are encouraged to apply for travel scholarships by May 15, 2017 at […]
          Smart City (เมืองอัจฉริยะ)

Efficient use of basic energy, through the management of energy systems, water systems, and the city's other infrastructure, is an approach that countries around the world now prioritize, especially in cities that are developing their basic public utilities on the path to becoming smart cities, aiming for a secure and sustainable energy supply. The Smart City is therefore one of the hottest and most closely watched trends in energy-management and communications technology.

A Smart City is a solution in which citizens and residents can exchange data and communicate with one another over fast technology systems, whether wired or wireless. These systems must also come with security, and they must be environmentally friendly and energy-efficient as well.

At the heart of Smart City development is the adoption of IT to help manage the city, covering the basics: transport management, generating data to support planning decisions, informing citizens, and managing the energy systems, water systems, and other infrastructure of the city. This is a new dimension that links two major industries, IT and real estate, both of which are giant sectors in every country.

Adopting IT involves deploying data-collection systems (such as Zigbee sensors) in many forms to gather data on consumption, daily life, and the state of the city's infrastructure. The collection systems may be infrastructure built specifically for the purpose, such as Smart Meters that record each household's electricity use in detail, or automatic traffic-condition detection via IP cameras or various kinds of radar.
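To make the household-level metering concrete, here is a minimal Python sketch of the kind of aggregation a utility might run over Smart Meter readings. The readings, household IDs, and hours are invented sample data, not output from any real metering platform.

    from collections import defaultdict

    # Hypothetical smart-meter readings: (household_id, hour_of_day, kWh used).
    readings = [
        ("H001", 7, 0.8), ("H001", 19, 2.4), ("H001", 20, 2.1),
        ("H002", 7, 0.5), ("H002", 12, 1.1), ("H002", 19, 1.9),
    ]

    def hourly_profile(rows):
        # Sum consumption across households for each hour of the day:
        # the kind of profile used to spot an evening demand peak.
        profile = defaultdict(float)
        for _, hour, kwh in rows:
            profile[hour] += kwh
        return dict(sorted(profile.items()))

    print(hourly_profile(readings))  # e.g. {7: 1.3, 12: 1.1, 19: 4.3, 20: 2.1}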

Another form of data collection is crowd-sourcing: gathering data directly from citizens, either reported explicitly by users or collected from them automatically. Crowd-sourced databases may also include data from social networks, the movement of people detected through signals from CCTV (IP cameras), and usage behavior on smartphones. Data of this kind is what is called Big Data.

Smart City development also means building systems that analyze the collected data to find patterns, citizen behavior, and the state of infrastructure under various conditions. For example, take social network data collected in Bangkok via Twitter or Facebook, specifically messages about satisfaction with bus services: an analysis system can run the Twitter or Facebook data through a Data Mining process to rank the bus-service problems that are talked about the most.
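As an illustration of the kind of text mining described above, the following minimal Python sketch ranks complaint topics across a set of already-collected messages. The sample messages and the keyword-to-topic mapping are hypothetical placeholders; a real system would pull posts from the Twitter or Facebook APIs and handle Thai-language text.

    from collections import Counter

    # Hypothetical, already-collected messages about bus service.
    messages = [
        "bus 8 was 40 minutes late again this morning",
        "the bus was so crowded I could not get on",
        "driver skipped my stop and now I am late for work",
        "bus late and overcrowded on route 73",
    ]

    # Keyword lists defining each complaint topic (assumed for this sketch).
    topics = {
        "delays": ["late", "delay", "waiting"],
        "crowding": ["crowded", "overcrowded", "full"],
        "driver behavior": ["driver", "skipped", "rude"],
    }

    def rank_complaints(msgs, topic_keywords):
        # Count how many messages mention each topic, most frequent first.
        counts = Counter()
        for msg in msgs:
            text = msg.lower()
            for topic, keywords in topic_keywords.items():
                if any(kw in text for kw in keywords):
                    counts[topic] += 1
        return counts.most_common()

    for topic, n in rank_complaints(messages, topics):
        print(f"{topic}: mentioned in {n} messages")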

Smart City development further includes the city-management component, which applies the analysis results to control infrastructure or influence citizen behavior. A clear example is traffic signals: if they are intelligent, then after receiving traffic-flow and congestion data from the CCTV (IP camera) network, the analysis system can adjust the signal timing at each junction to match travel directions and congestion in each area in real time. Management can also take the form of giving citizens the current status of infrastructure so they can adjust their own behavior.
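To ground the traffic-signal example, here is a minimal sketch that splits a fixed signal cycle among the approaches of a junction in proportion to measured congestion. The congestion scores, cycle length, and safety minimum are invented inputs; a real deployment would derive the scores from the IP-camera feeds described above.

    # Hypothetical congestion scores per approach (e.g., derived from
    # IP-camera vehicle counts); higher means more congested.
    congestion = {"north": 42, "south": 18, "east": 7, "west": 33}

    CYCLE_SECONDS = 120      # total signal cycle length (assumed)
    MIN_GREEN_SECONDS = 10   # safety floor for every approach (assumed)

    def allocate_green_time(scores, cycle, min_green):
        # Split the cycle so each approach gets green time proportional
        # to its congestion, but never less than the safety minimum.
        spare = cycle - min_green * len(scores)
        total = sum(scores.values()) or 1  # avoid division by zero
        return {
            approach: min_green + round(spare * score / total)
            for approach, score in scores.items()
        }

    print(allocate_green_time(congestion, CYCLE_SECONDS, MIN_GREEN_SECONDS))
    # {'north': 44, 'south': 24, 'east': 16, 'west': 36}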

Boulder, Colorado, in the United States, has been dubbed a Smart City because Smart City systems have been rolled out to households there continuously. Phase 1 covered about 45,000 homes, in a project run jointly with the city at no cost to residents. Under the city-wide Smart City rollout, each home receives a Smart Meter working with an internet transmission unit and the other necessary equipment; users can check their consumption on a website and use it to plan their energy savings effectively.

Smart City development is regarded as a new industry that well-known private companies from many countries are prioritizing. Early indications are that electronics companies from the United States, Japan, and Europe are competing fiercely on strategies to build products and business opportunities around the Smart City. These companies see the chance to move beyond their traditional consumer businesses into infrastructure-management services for governments and into real-estate development, building a business base and a reputation for helping raise the quality of urban life.

Credit: Thansettakij (ฐานเศรษฐกิจ)


          Javier Rojas   
Institution/Organization: St. Thomas University
Department: School of Science, Technology & Engineering Management
Academic Status: Graduate Student
Conference theme areas of interest: Data Analytics and Visualization; Data-Driven Modeling and Prediction; Scientific Software and High-Performance Computing
Interests: Applied Mathematics; Computer Science; Big Data Analytics; Data Mining; Machine Learning
          (SAU-DHAHRAN) Database (MaPS) Data Analyst/Administrator   
Location: Saudi Arabia | Position: Database (MaPS) Data Analyst/Administrator | Employment Type: Full Time Regular

ABOUT THIS JOB
Baker Hughes Incorporated (BHI) has an opening for a Database (MaPS) Data Analyst/Administrator in Saudi Arabia. As a leader in the oilfield services industry, Baker Hughes offers opportunities for qualified people who want to grow in our high performance organization. Our leading technologies and our ability to apply them safely and effectively create value for our customers and our shareholders.

Job Description
Baker Hughes Saudi Arabia is looking for a self-motivated and passionate certified Data Analyst. The successful candidate will turn data into information, information into insight and insight into business decisions.

Duties
The responsibilities include conducting full-cycle analysis, including requirements and design. Data analysts will develop analysis and reporting capabilities.

Responsibilities
+ Interpret data, analyze results using statistical techniques and provide ongoing reports
+ Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality
+ Acquire data from primary or secondary data sources and maintain databases/data systems
+ Identify, analyze, and interpret trends or patterns in complex data sets
+ Filter and “clean” data by reviewing reports, printouts, and performance indicators to locate and correct source or coding problems
+ Work with management to prioritize business and information needs
+ Locate and define new process improvement opportunities

Requirements
+ Proven working experience as a data analyst or business data analyst
+ Technical expertise regarding data models, database design development, data mining and segmentation techniques
+ Strong knowledge of and experience with reporting packages (Microsoft SQL Server Reporting Services (SSRS)) and Microsoft Power BI, and databases (Microsoft SQL Server, SQL Server Management Studio (SSMS), etc.)
+ Knowledge of statistics and experience using statistical packages for analyzing datasets (Microsoft SQL Server Analysis Services (SSAS), Microsoft Excel, OLAP, R, Minitab)
+ Programming (XML, Javascript, etc.); knowledge of Apache Tomcat (incl. Servlet and JSP, etc.)
+ Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
+ Adept at queries, report writing and presenting findings
+ BS / MS in Mathematics, Economics, Computer Science, Information Management or Statistics
+ Ability to travel and live outside of Saudi Arabia if required

COMPANY OVERVIEW
Baker Hughes is a leading supplier of oilfield services, products, technology and systems to the worldwide oil and natural gas industry. By being the service company that best anticipates, understands and exceeds our customers' expectations, Baker Hughes Advances Reservoir Performance. The company's 39,000-plus employees work in more than 80 countries in geomarket teams that help customers find, evaluate, drill, produce, transport and process hydrocarbon resources. Baker Hughes' technology centers in the world's leading energy markets are pushing the boundaries to overcome progressively more complex challenges. Baker Hughes develops solutions designed to help manage operating expenses, maximize reserve recovery and boost overall return on investment through the entire life cycle of an oil or gas asset.
Collaboration is the foundation upon which Baker Hughes builds our business and develops next-generation products and services for drilling and evaluation, completions and production, and fluids and chemicals. For more information on Baker Hughes' century-long history, visit our website.

Baker Hughes is an Equal Employment Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to age, gender, gender identity, marital status, pregnancy, race, national origin, ethnic origin, color, disability status, veteran status, religion, sexual orientation or any other protection guaranteed by local law.

If you are applying to a position in the US and you are an individual with a disability or a disabled veteran and would like any type of assistance to submit an application or to attend any recruitment or selection event, we would like to help you to ensure that your experience is as smooth as possible. If you need assistance, information, or answers to your questions, feel free to contact us or have any of your representatives contact us at Baker Hughes Application Assistance Toll Free at 1-866-324-4562. This method of contact has been put in place ONLY to be used by those internal and external applicants who have a disability and are requesting accommodation.

For all other inquiries on your application, log in to your profile and click on the My Jobpage tab. General application status inquiries will not be handled by the call center.

Job: Engineering | Title: Database (MaPS) Data Analyst/Administrator | Location: MIDDLE EAST-SAUDI ARABIA-SAUDI ARABIA-DHAHRAN | Requisition ID: 1707831
          Visit GIGABYTE at the ISC 2015 Exhibition in Frankfurt   
For the second time, GIGABYTE will be present at the International Supercomputing Conference 2015 to showcase its latest high performance computing servers. If you are attending the show, don't hesitate to stop by booth #650 and say hello! Among the highlights, GIGABYTE will notably present:

Extreme Core Density Designed for Scale-Out Computing
The H270-T70 is a 2U 4-node rackmount system based on the Cavium ThunderX ARMv8 processor and the latest fruit of our efforts to bring accomplished ARM-based server products to the market, thereby opening new doors for scale-out server workloads. With 384 cores in a 2U form factor, it packs serious computing power for the most core-hungry applications.

A New Storage Over Ethernet Solution
The D120-S3G system is designed as an add-on storage expansion for existing server infrastructures and can support up to 100TB of raw capacity within a 1U rackmount, which makes it one of the highest-density and lowest cost-per-TB storage systems available. Based on an Annapurna Labs ARM SoC, it connects to a network over a dual 10GbE SFP+ interface and to its drives via SATA III 6Gb/s ports, in order to provide appropriate bandwidth and transfer rates for applications such as cold storage, data archiving, video surveillance, and TV broadcast.

A Single HPC Node With 8 x Computing Cards
Our G250 Series is a lineup of 2U rackmounts with the unique ability to receive up to 8 double-slot GPGPU or co-processing cards. With 4 lateral trays able to host 2 double-slot cards each, the G250 can be fitted with the most powerful computing cards from Intel, NVIDIA or AMD, and opens the door to record computing performance for a single 2U system. This makes them strong contenders to accelerate applications in scientific simulation & modeling, engineering, visualization & rendering, data mining, and other computing-intensive programs.

For more info about the show, please visit its official website. We hope to see you there!
          How Big Data Became So Big – Unboxed – NYTimes.com   
How big data became the new marketing term for businesses, in contrast to some of the drier terminology like ‘data mining’, ‘business intelligence’ and ‘data analytics’. Quote: IT may seem marketing gold, but Big Data also carries a darker connotation, as a linguistic cousin to the likes of Big Brother, Big Oil and Big Government. […]
          Artist Profile: The Hitchhiker – South Korean indie outfit goes deep   

In William Gibson’s “Pattern Recognition”, a data miner sets out to aggregate and assemble a series of photographs produced by a mysterious creator who may be a sort of Kubrick-esque savant and whose stills elicit profound emotional impact and meaning when viewed. In some way, The Hitchhiker, hailing from the land of K-Pop and [...]

The post Artist Profile: The Hitchhiker – South Korean indie outfit goes deep appeared first on Music Zeitgeist - The Best New and Indie Music Blog.


          Well-connected IT-enabled drug repurposing shop raises cash for rare disease R&D push    
Amadeus Capital Partners and Abcam founder Jonathan Milner have teamed up to invest in Healx, a British startup working with patient advocacy groups to repurpose drugs for rare diseases. Healx will use the money to step up the expansion of a data mining-based approach to drug discovery.
           [ELEARN] ICDIM 2017 - Twelfth International Conference on Digital Information Management
Twelfth International Conference on Digital Information Management (ICDIM 2017)
Kyushu University, Fukuoka, Japan
September 12-14, 2017
www.icdim.org
Technically and Financially co-sponsored by TEMS, IEEE


Following the successful earlier conferences at Bangalore (2006), Lyon (2007), London (2008), Michigan (2009), Thunder Bay (2010), Melbourne (2011), Macau (2012), Islamabad (2013), Thailand (2014), Republic of Korea (2015) and Porto (2016), the twelfth event is being organized at Kyushu University, Fukuoka, Japan in 2017.
The International Conference on Digital Information Management is a multidisciplinary conference on digital information management, science and technology. The principal aim of this conference is to bring people in academia, research laboratories and industry together, and offer a collaborative platform to address the emerging issues and solutions in digital information science and technology.

Digital information technologies are gaining maturity and rapid momentum in adoption across disciplines. The digital community is producing new ways of using digital information technologies for integrating and making sense of data, ranging from live streams and simulations to analytics, in support of knowledge mining. The conference will feature original research and industrial papers on the theory, design and implementation of digital information systems, as well as demonstrations, tutorials, workshops and industrial presentations.

The Twelfth International Conference on Digital Information Management will be held during September 12-14, 2017 at Fukuoka, Japan

The topics in ICDIM 2017 include but are not confined to the following areas.
  •    Information Retrieval
  •    Data Grids, Data and Information Quality
  •    Big Data Management
  •    Temporal and Spatial Databases
  •    Data Warehouses and Data Mining
  •    Web Mining including Web Intelligence and Web 3.0
  •    E-Learning, eCommerce, e-Business and e-Government
  •    Natural Language Processing
  •    XML and other extensible languages
  •    Web Metrics and its applications
  •    Enterprise Computing
  •    Semantic Web, Ontologies and Rules
  •    Human-Computer Interaction
  •    Artificial Intelligence and Decision Support Systems
  •    Knowledge Management
  •    Ubiquitous Systems
  •    Peer to Peer Data Management
  •    Interoperability
  •    Mobile Data Management
  •    Data Models for Production Systems and Services
  •    Data Exchange issues and Supply Chain
  •    Data Life Cycle in Products and Processes
  •    Case Studies on Data Management, Monitoring and Analysis
  •    Security and Access Control
  •    Information Content Security
  •    Mobile, Ad Hoc and Sensor Network Security
  •    Distributed information systems
  •    Information visualization
  •    Web services
  •    Quality of Service Issues
  •    Multimedia and Interactive Multimedia
  •    Image Analysis and Image Processing
  •    Video Search and Video Mining
  •    Cloud Computing
  •    Intelligence Systems
  •    Artificial Intelligence Applications

+ Proceedings

- All the accepted papers will appear in the proceedings published by IEEE.
- All papers will be fully indexed by IEEE Xplore.
- All the ICDIM papers are indexed by DBLP.

General Chair

Taketoshi Ushiama (Kyushu University, Japan)

Honorary Chair
Toyohide Watanabe (Nagoya Industrial Science Research Institute, Japan)

Organizing Chair
Manabu Ohta (Okayama University, Japan)

Local Arrangement Chair
Toki Takeda (NTT, Japan)

Program Chairs
Ramiro Smano Robles, Instituto Superior de Engenharia do Porto, Portugal
Yao-Liang Chung, National Taiwan Ocean University, Taiwan
Hung-Yuan Chung, National Central University, Taiwan

Important Dates


Full Paper Submission    July 1, 2017
Notification of Authors    August 1, 2017
Registration Due    September 1, 2017
Camera Ready Due    September 1, 2017
Workshops/Tutorials/Demos    September 13, 2017
Main conference    September 12-14, 2017

Submissions at http://icdim.org/submission.html
Contact: conference at icdim.org

          Comment on Statistical SEO by John Demy   
Hey Mark!!! Big John--both Little John and I are fascinated with data mining!!! Please post more!!! Big John
          (USA-WA-Seattle) Finance Manager, US Hardlines   
Want to help build Earth’s leading e-commerce finance team? At Amazon, it’s our goal to be Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online. To support our continued growth, we are looking for the most exceptional finance professionals to join our Hardlines Finance team. The Hardlines business group is one of Amazon's four core retail businesses, along with Consumables, Softlines, and Media. Hardlines encompasses a vast and growing range of businesses from multi-billion dollar categories like Consumer Electronics, Toys and Furniture to new businesses like Treasure Truck; stand-alone subsidiaries like Fabric.com and Woot.com; cross portfolio centers of excellence such as Private Brands, and the new B2B channel, Amazon Business, that's building on all the things consumers love about Amazon to deliver a unique and compelling range of products and services specifically tailored for business customers. The Hardlines Finance team plays a critical leadership role partnering with the category leaders, operations and technology teams to continuously innovate, delight customers, grow and optimize their businesses for long term free cash flow. If you have a bias for action, can think big but also dive deep and don't settle for second best, we want to hear from you. We are seeking a Finance Manager to support the Furniture business, one of the fastest growing businesses at Amazon. This is an exciting opportunity that blends an individual's desire to drive business decision making with their finance acumen while supporting one of the largest and fastest growing businesses at Amazon. The successful candidate will work closely with the senior leadership team to provide strategic guidance and decision support in this rapidly evolving space. Primary responsibilities include: + Controllership of the businesses + Leading ad-hoc financial analyses and new business development deals + Leading and participating as the key finance stakeholder in cross functional teams + Managing the reporting on weekly financial and operational performance metrics + Long and short term financial planning + Driving cross-business analytic projects for senior management, with financial modeling, data mining and presentation support · BA/BS degree in business, finance, or a related field · 7+ years of relevant, progressive professional experience · MBA preferred · Highly analytical and detail oriented · Ability to develop new ideas and creative solutions · Ability to work successfully in an ambiguous environment · Ability to meet tight deadlines and prioritize workload · Excellent communication skills, both verbal and written · Customer focus and professional demeanor · Strong track record of business partnership · People management experience · Experience and ability to use Essbase, Cognos, or similar tools AMZR Req ID: 553133 External Company URL: www.amazon.com
          (USA-WA-Seattle) AmazonSmile Software Development Engineer   
We’re growing our software engineering team to expand the AmazonSmile program and make it even easier for our customers to support their favorite charitable organization every time they shop. As a Software Development Engineer, you’ll be working with our team responsible for the core infrastructure that powers the retail website, our charity registration portal (OrgCentral), and our donation processing system. You’ll have opportunities to process big data and design and implement services at high scale, all while supporting almost a million nonprofits across the country. You might be right for the team if: + You have a positive and optimistic personality. Setbacks motivate you to work harder. + You like to work with a wide array of technology (services, front-end, data mining) rather than specialize in one particular area. + You’d prefer to be thrown in the deep end and solve complex problems for yourself rather than have your hand held. + You adapt to change well, and aren’t particularly fazed by course changes. + You truly care about the business results of what you build, in addition to the elegance of the technology you build. If that sounds like you and you meet the software engineering qualifications below, then you should apply for this position. If this isn't you but sounds like someone you know, then send this page their way. Our engineers are required to have these qualifications: + Bachelor’s Degree in Computer Science or related field + Proficiency in at least one modern programming language, such as C++, Java, or Python + 3+ years professional experience in software development + Solid understanding of Computer Science fundamentals like object-oriented design, data structures, algorithm design, problem solving, and complexity analysis + Experience designing and implementing distributed systems An engineer is particularly valuable to our team if they have any of these qualifications: + Experience in Python, Java, and Spring + Experience designing and implementing distributed systems at high scale + Excellent verbal and written communication skills + Experience brewing beer/hard cider or baking cookies Keywords: Software Engineer, Software Development Engineer, SDE, Software Developer, Programmer, Code Ninja, Hacker, Software Engineer, Charity, Giving AMZR Req ID: 553078 External Company URL: www.amazon.com
          (USA-WA-Seattle) Web Development Engineer   
We’re building a new platform to make it easier for teams across Amazon to launch targeted campaigns anywhere on the retail website, and we need a Web Development Engineer to help design, implement, and launch this V1 initiative. You’ll be a founding member of the team, on which we’ll have opportunities to build a suite of frontend widgets for teams using the platform, as well as design and launch our own experiments to help grow the core Amazon retail business. This team will have exposure to business and technology teams throughout the company as we flesh out the platform and onboard new clients. You might be right for the team if: + You have a positive and optimistic personality. Setbacks motivate you to work harder. + You like to work with a wide array of technologies (services, front-end, data mining) rather than specialize in one particular area. + You thrive when thrown in the deep end to solve ambiguous and complex problems for yourself rather than have your hand held. + You adapt to change well, and aren’t particularly fazed by course changes. + You truly care about the business results of what you build, in addition to the elegance of the technology you build. If that sounds like you and you meet the software engineering qualifications below, then you should apply for this position. If this isn't you but sounds like someone you know, then send this page their way. Our engineers are required to have these qualifications: + Bachelor’s Degree in Computer Science or related field + Proficiency using modern web development technologies and techniques, including JavaScript, AJAX, HTML5, CSS, Responsive Design, web services, etc. + 3+ years professional experience in web software development + Expertise with browser tuning and optimization techniques / tools + Experienced in web security, SEO, accessibility and internationalization + Solid understanding of Computer Science fundamentals like object-oriented design, data structures, algorithm design, problem solving, and complexity analysis An engineer is particularly valuable to our team if they have any of these qualifications: + Experience designing and implementing distributed systems at high scale + Excellent verbal and written communication skills + Previous experience as a technical lead + Experience in communicating with business teams, other development teams, and management to collect requirements, describe technical designs, and coordinate deliverables + Experience brewing beer/hard cider or baking cookies Keywords: Software Engineer, Software Development Engineer, SDE, Software Developer, Programmer, Code Ninja, Hacker, Software Engineer, Front-end AMZR Req ID: 553069 External Company URL: www.amazon.com
          (USA-WA-Seattle) Business Analyst, Air PM   
Amazon is seeking an experienced candidate to identify, create, develop and integrate innovative solutions and programs that lead to improvements in our transportation network. This position will lead the development and execution of new worldwide transportation initiatives designed to improve overall efficiency and meet the ever-growing demand for transportation capacity. The successful candidate will have strong data mining and modeling skills and is comfortable facilitating ideation and working from concept through to execution. To be successful in this role candidates must: + Have strong math, logic and problem-solving skills + Have strong written and verbal communication skills + Have excellent problem-solving, task prioritization, follow-up, and customer service skills + Work well in a team environment + Be able to work in an ambiguous environment with little guidance or supervision Responsibilities include, but are not limited to: + Collect data and perform advanced modeling and statistical analysis + Model, evaluate and implement opportunities in a complex transportation network + Map, document and recommend process improvements + Assist in communications with internal and external customers to understand business requirements + Determine efficient utilization of resources + Actively engage with internal partners throughout the organization to meet and exceed customer service levels & transport-related KPIs + Research and implement cost reduction opportunities + Bachelor’s Degree in Logistics, Transportation, Engineering, Business Administration, Math, Finance + 2+ years of experience working in Advanced Excel + 2+ years of experience working with SQL + Experience preparing, reporting and interpreting large sets of data with accompanying business recommendations + Experience in modeling and analyzing complex logistics networks + Demonstrated evidence of reducing transportation, labor and inventory costs through application of logistics and supply chain optimization methodologies + Experience communicating across all levels of management, peers, and clients + Experience using route and load optimization tools as well as transportation management systems + Experience in operations research model design + Advanced degree in Logistics or Transportation + Analyst/Project Management experience Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation. AMZR Req ID: 552610 External Company URL: www.amazon.com
          (USA-WA-Seattle) Data Engineer, Amazon Video   
Amazon Video (AV) is a digital video streaming and download service that offers Amazon customers the ability to rent, purchase or subscribe to a huge catalog of videos. This position focuses on the rental and purchase side of the Amazon Video business. As a Data Engineer in Amazon Video, you will work directly with stakeholders and technical partners to design and implement cutting edge data solutions that provide actionable insights to the business. You will be leading the charge in making granular event data easily usable and accessible, and participate in developing the technical strategy to do so. You will work with a wide range of data technologies (e.g. Kinesis, Spark, Redshift, EMR, Hive, and Tableau) and stay abreast of emerging technologies, investigating and implementing where appropriate. Our ideal candidate has outstanding technical skills, analytical capabilities, business insight, and communication skills, and maintains a strong passion for technology. In this role you will: 1. Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. 2. Partner with analysts, applied scientists, data engineers, business intelligence engineers, and software development engineers across Amazon to produce complete data solutions. 3. Interface directly with stakeholders, gathering requirements and owning automated end-to-end reporting solutions 4. Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Oracle, Redshift, and OLAP technologies. 5. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. 6. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. + BS degree in information management, computer science, math, statistics, or equivalent technical field + 2+ years of relevant experience in business intelligence role, including data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions + Mastery of relevant technical skills, including SQL, data modeling, schema design, data warehouse administration, BI reporting tools (e.g. Tableau), scripting for automation + Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets + Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way + Excellence in technical communication and experience working directly with stakeholders + 3+ years’ experience in Oracle and Redshift including complex querying, analytical functions, and database tuning for optimal query performance with large data sets + 3+ years’ experience in Datanet or other ETL technologies + Experience with data processing using custom scripts to pull and load from APIs or Files. + Experience with Python or similar programming/scripting language(s). AMZR Req ID: 552093 External Company URL: www.amazon.com
          (USA-WA-Seattle) Business Analyst, FBA Fees   
Fulfillment by Amazon (FBA) leverages Amazon’s global fulfillment and customer service network for third party sellers who want to grow their business on and off Amazon.com. FBA enables customers to take advantage of Free Super Saver Shipping and Amazon Prime on third party items, while sellers can focus on selling rather than shipping. The FBA Fee team is looking for an experienced and self-driven Business Analyst to join the team. The candidate is expected to leverage the latest in data mining and predictive modeling techniques to enhance our current pricing calculation models. The individual will be responsible for developing quantitative models to improve our understanding of Seller behavior and to support other ongoing analytical efforts of the FBA Fees team. Ideally, the candidate should be comfortable working with ambiguous data and with data from multiple sources. You would be expected to analyze large datasets, identify trends and patterns, and uncover insights for key business decisions. The candidate will work closely with teams in Product Development, Marketing, Business Strategy, Supply Chain and Software Development on a day-to-day basis. What we are looking for: + Experience in mining large quantities of data using SQL and other tools (required) + Experience in using statistical and econometric concepts to solve real-life business problems + Strong problem-solving skills + Someone who can think big and be creative (while paying careful attention to detail), and will enjoy working in a fast-paced dynamic environment. Key Responsibilities: + Drive development of quantitative models necessary for the evaluation and implementation of new pricing strategies + Develop tools to understand Sellers’ behaviors related to pricing changes + Collaborate with product managers to develop pricing recommendations for new features or services + Partner with finance and product management as a leader of quantitative analysis + Communicate with software developers to ensure proper implementation of complex models + Analyze and solve business problems at their root, stepping back to understand the broader context + Write high quality code to retrieve and analyze data + Learn and understand a broad range of Amazon’s data resources and know how, when, and which to use + Manage and execute entire projects or components of large projects from start to finish including project management, data gathering and manipulation, synthesis and modeling, problem solving, and communication of insights and recommendations + M.S. in a quantitative field such as Economics, Analytics, Mathematics, Statistics or Operations Research + At least 2 years of relevant experience in analytics using advanced forecasting, optimization and/or machine learning techniques + Experience solving complex quantitative business challenges + Verbal/written communication & data presentation skills, including an ability to effectively communicate with both business and technical teams + Experience in data mining (SQL, ETL, data warehouse, etc.) and using databases in a business environment with large-scale, complex data + At least 4 years of relevant experience in advanced forecasting, optimization and/or machine learning techniques + Ability to build model prototypes using appropriate tools (R/SAS/Python…) + Knowledgeable in demand modeling, pricing optimization, and customer/product segmentation AMZR Req ID: 550158 External Company URL: www.amazon.com
          (USA-WA-Seattle) Software Development Engineer   
Amazon Global Selling (AGS) is focused on breaking down barriers to allow 3rd-party Sellers to sell their items to Customers around the world. The AGS team develops software that removes friction from the process of cross border selling for 3rd-party Sellers. The AGS team is responsible for development of systems that enable Sellers to expand their business to new customers around the world through increased exports and listing of their products for sale in new countries. We need your help to grow this business by building highly-available and scalable distributed systems that provide clean interfaces between Sellers, Customers and Amazon's software. Within AGS, the Global Selling Intelligence (GSI) team is responsible for building a highly-available, scalable artificial intelligence platform that reduces the complexity of adding Machine Learning (ML) to Global Selling products and services for cross-border sellers. We collect petabytes of data from a variety of data sources inside and outside Amazon including Amazon’s Product catalog, seller inventory, customer orders, and page loads. Our data and ML platform enables ML exploration and production by providing services for AGS ML and tech teams to access data and make predictions hundreds of thousands of times per day, using Amazon Web Services (AWS) Redshift, Hive, Spark, etc. AGS is seeking an outstanding Software Development Engineer to join the Global Selling Intelligence (GSI) team. In this role, you will work in one of the world's largest and most complex data environments. You will apply your deep expertise in the design, creation, and management of large datasets to build highly-available systems for the extraction, ingestion, and processing of data at Amazon scale. In this role, you will own the end-to-end development of solutions to complex problems and play an integral role in strategic decision making. You will lead and mentor junior engineers and lead communications with management and other teams. + Bachelor’s Degree in Computer Science or related field + 3+ years of software development experience in at least one modern programming language (Python, Java, Scala, etc.) + Experience with Object-Oriented Programming and Design + Strong Computer Science fundamentals in data structures, algorithms, problem solving, distributed systems, and complexity analysis + Experience with system architecture and design + Deep knowledge in data mining, machine learning, or information retrieval + Experience with Big Data Technologies (Hadoop, Hive, Hbase, Pig, Spark, etc.) + Master's Degree in Computer Science, Math or a related field + Industry experience as a Back-End Software Engineer or related specialty + Experience building highly available, distributed systems for data extraction, ingestion, and processing of large data sets in production + Experience building data products incrementally and integrating and managing datasets from multiple sources + Experience with AWS technologies including Redshift, Aurora, S3, EMR, EML + Experience with unstructured data in NoSQL databases + Knowledge of professional software engineering best practices including coding standards, code reviews, source control management, configuration management, build processes, testing, and operations + Experience with Agile software development in a UNIX/Linux environment + Strong written and spoken communication skills Amazon is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation.
AMZR Req ID: 545376 External Company URL: www.amazon.com
          (USA-WA-Seattle) Sr Client Manager   
**Job Description:** **ABOUT THE JOB (JOB BRIEF)** As an integral member of the Treasury Management (TM) Corporate Bank Sales team, collaborates to acquire, expand and retain business clients who have treasury management needs. Maintains the “first point of contact” role for any customer service needs, inquiries or problems and ensures that clients receive a full and effective after-sales experience. **ESSENTIAL JOB FUNCTIONS** + Independently collaborates with partners to expand and retain business clients who have treasury management needs. + Own maintenance of cash management services to fully meet the needs of existing clients including project completion and tracking (e.g. product/service updates and upgrades; new service level introduction/enhancements, compliance projects, etc.) as well as necessary review of any existing services utilized. + Prepare, deliver, and review agreements. + Adhere to all Risk policies and procedures including ECP Risk Testing initiatives. + Mentor and coach Client Managers on team + Independently initiate and conduct necessary research that may need to be done in conjunction with on-going client needs. + Generate cross-sell leads to sales teams and take lead position in regular Client Management sales initiatives + Maintain a more significant or in-depth portfolio/workload than that of a Treasury Client Manager _In some cases and on some teams the following job functions may also apply:_ + Partner with the Implementation Specialist Team on more complex client implementations + For more complex client implementations, will work with Client Implementation Specialist to manage the request for new product set-up, ensuring timely processing of Client Service Orders (CSOs), verifying products are properly set up in accordance with published service level agreements, providing technical support and training to clients concerning product usage and functionality and ensuring client satisfaction. + Working with the Customer Account Maintenance (CAM) Group, open new DDA accounts and field incoming client calls to effectively resolve questions, problems or concerns regarding account opening and/or maintenance requests within established service level agreements. **REQUIRED QUALIFICATIONS** + Bachelor's Degree or similar work experience with 3 or more years of banking or cash management experience + Demonstrated success in a customer service environment + Possess strong independent analytical and data mining skills + Excellent verbal and written communication skills. Previous experience presenting to clients + Self-motivated and ability to participate effectively in a highly collaborative work team + Excellent organizational skills with the ability to set priorities and handle difficult situations while maintaining strong personal relationships + Detail oriented and ability to follow through + Proficient knowledge of Microsoft Office programs including Word, Excel and PowerPoint + Demonstrated understanding of working capital + Capable of mentoring others + Expertise in areas of risk adherence + Demonstrates ability to actively engage management and others with innovative ideas to enhance team’s overall performance **PREFERRED QUALIFICATIONS** + Certified Cash Manager (CCM)/Certified Treasury Professional (CTP) preferred **ABOUT KEY:** KeyCorp's roots trace back 190 years to Albany, New York. Headquartered in Cleveland, Ohio, Key is one of the nation's largest bank-based financial services companies, with assets of approximately $134.5 billion at March 31, 2017.
Key provides deposit, lending, cash management, insurance, and investment services to individuals and businesses in 15 states under the name KeyBank National Association through a network of more than 1,200 branches and more than 1,500 ATMs. Key also provides a broad range of sophisticated corporate and investment banking products, such as merger and acquisition advice, public and private debt and equity, syndications, and derivatives to middle market companies in selected industries throughout the United States under the KeyBanc Capital Markets trade name. KeyBank is Member FDIC. **ABOUT THE BUSINESS:** Key Corporate Bank is a full-service corporate and investment bank focused principally on serving the needs of middle market clients in seven industry sectors: consumer, energy, healthcare, industrial, public sector, real estate, and technology. Key Corporate Bank delivers a broad product suite of banking and capital markets products to its clients, including syndicated finance, debt and equity capital markets, commercial payments, equipment finance, commercial mortgage banking, derivatives, foreign exchange, financial advisory, and public finance. **FLSA STATUS:** Exempt KeyCorp is an Equal Opportunity and Affirmative Action Employer committed to engaging a diverse workforce and sustaining an inclusive culture. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status. **Employee Type:** Full-Time **Location:** Seattle, WA **Experience:** Not Specified **Date Posted:** 6/28/2017
          President Obama, listening in on good Americans   
Hope everyone will let me slide on not having any new blogs in recent weeks. I moved to a new apartment in Key West and don't have WiFi, so I now go to the Wendy's on North Roosevelt, get a Frosty, and write. Not what one thinks of when one imagines life in Key West.
But since I've been gone for a little bit, I am just going to run a few thoughts past you.
First, my novel, "Maddie's Gone" is getting good reviews and the little dog has begun to work her way into people's hearts: people who are considering killing their boyfriends, people who listen to shrimp captains spin lies, and people who try to steal dogs for ransom, that is.
One of the great things about writing a book is that you get to talk about yourself, which is my favorite subject.
See my TV interview where I discuss Maddie's plight here

Privacy on the half-shell

Since last I wrote this blog, we've learned that the National Security Agency, which was once barred from aiming its eavesdropping electronics at American soil, has been using software that can capture and save vast buckets of voice, data, and video traffic from America's largest telecommunications networks.
We have heard this all before: That Americans who are doing the right thing have nothing to fear; that a warrant must be granted before contractors can open our email packets; and that there are wise people overseeing the sniffing programs.
Humans, as we know, are fallible, have bad days, have bad intentions, and screw up all the time. I do not trust any well-intentioned spying or data mining operation that seeks to find out what we're talking about and to whom.
I say the White House must notify any Americans, in writing, whose electronic traffic it has intercepted and not found useful. In other words, if my email is being read and discarded as not criminal or dangerous, then the government must tell me. Just an idea; that way, innocent Americans know their information is being captured.
I found it odd that a week after Americans went nuts over learning that privacy is not real, the CIA told Congress that it had foiled dozens and dozens of terrorist plots since starting the communications-mining program. The timing was meant to convince Americans that the program was necessary to stop attacks.
What about all the conversations between Boston and Chechnya? Didn't stop the Brothers Karama-bomb from killing and maiming. In fact, the FSB (once the KGB) and the FBI were in full conversation about the two brothers and they couldn't stop those two. So any arguments that reading and listening en masse to our digital traffic is necessary to halt terrorist attacks make no sense to me.

Key West predicts hurricane this year

Locals in Key West are nodding their heads as summer heats up. There will be a storm this year. Why? Higher temperatures than usual and a two-week rain field that stalled over Key West. The wind and the soggy skies continue to flow in from the southeast, the direction from which most storms come.
Also, there is a dust that coats car windshields and the surfaces of swimming pools in backyards. That's sand from the Sahara following the high-level wind currents that flow steadily from the coast of Africa westward.
It's time to get the gallon jugs of water, batteries, candles, hand-cranked radios and gas generators. Also booze, cigarettes, and, well, whatever suits one's own needs.
There still is no shelter for Keys residents on the mainland. We used to drive to Florida International University in Miami and hang out in a large building there, but that is no longer available to Key Westers. The governor, who is a staunch, right-wing Republican, hasn't yet named a new mainland shelter for us liberals down here in the bottom of the Keys.
I hope someone is listening to his electronic traffic.

Talk to you all soon!
John Guerra



          We are looking for a Metrics and Analytics Architect | Responsibilities: Establish strong ...
We are looking for a colleague for the Metrics and Analytics Architect position. | Responsibilities: Establish strong relationships with key stakeholders of all Supply Chain relevant metrics to gain an understanding of their strategies, objectives, and tactics to develop and improve a comprehensive measurement plan • Leverage advanced analytics techniques on large and disparate data sets to define Supply Chain metrics and analytics, including building and managing algorithms. Create visual tools and communicate results for use by the organization • Offer interpretation and insights on analytical findings in support of formulating analytics-enabled solutions relative to identified strategic objectives • Lead creation and continuous improvement of monitoring and analysis of Supply Chain metrics, developing standard reporting with visualizations to aid decision making • Continuously work on automating reporting and build in process efficiencies to minimize manual analyses • Assist in the development of the Supply Chain Annual Operating Plan. Provide regular reporting to key stakeholders. | Requirements: Bachelor's degree in business analysis, statistics, data analysis or related field, or equivalent work experience • 3+ years in a similar role or equivalent work experience • A background in data analysis and data mining techniques • Excellent organizational and project management skills • Ability to work well in cross-functional teams and prioritize requests from multiple stakeholders globally • Ability to ensure acceptance by other departments within the organization; stakeholder management skills • Ability to interact with multiple levels of leadership • Knowledge of SAP or other ERP systems • Experience in managing business improvement projects a plus • Experience in Supply Chain, Logistics, Procurement, Global Trade Compliance a plus • Master user of MS Office Suite • Excellent written/verbal/presentation communication skills • Fluent in English; other language skills a plus | More info and application here: www.profession.hu/allas/1042849
          Data Scientist   
Job Description

Who We’re Looking For

We are looking for an analytically oriented data scientist to join our growing team and help build cutting-edge data processes and products.

 

Your responsibilities would include:


Conducting data mining
Building predictive models
Developing proprietary algorithms and assets for clients
Drawing key insights to drive critical decision making
Creating high quality data driven presentations
Collaborating with peers and managers to consistently exceed client expectations
Participating in the full cycle of strategic client analysis projects from client facing kick-off to code delivery or implementation


 

The ideal candidate will have:


2+ years of hands-on experience doing quantitative analysis/modeling, technical/coding work
A desire to work on big data, predictive modeling, and machine learning projects (but not alone!)
Superb communication and presentation skills
Ability to think strategically, analytically, and proactively about diverse business problems
Passion for managing the quality & accuracy of analytics, including checking your and others’ work
Experience multi-tasking in a fast-paced – and a fast-growing – small company or professional services/consulting firm environment is a plus
Proficiency with Python
Expertise in SQL and relational databases
Familiarity with AWS
Exposure to big data tools such as Hadoop, Hive, Sqoop, Pig, Impala or Spark is a big plus
Experience in advertising or media & entertainment is a plus
Experience with predictive analytics / data science is a plus
Degrees in Physics, Mathematics ...


Read More on Datafloq

          Data Engineer with Scala/Spark and Java - Comtech LLC - San Jose, CA   
Job Description. Primary Skills: Big Data experience; 8+ years of experience in Java, Python and Scala; Spark and Machine Learning (3+ years); data mining, data analysis
From Comtech LLC - Fri, 23 Jun 2017 03:10:08 GMT - View all San Jose, CA jobs
          Discrimination-aware data mining   
Pedreschi, Dino and Ruggieri, Salvatore and Turini, Franco (2007) Discrimination-aware data mining. Technical Report, Dipartimento di Informatica, Università di Pisa, Pisa, Italy.
          Research - Meetup.com Group List - Gather emails and contact info - Upwork   
I will provide you with a link to the members section of a meetup.com group (  such as https://www.meetup.com/Women-in-Bitcoin/members/   ). I would like you to put each member listed in the group onto a spreadsheet and then find their email address, LinkedIn profile, state and city they currently live/work in, current employer, current job title, and any other relevant contact information.

There are 308 members listed in the meetup.com group I provided above. Can you please provide your bid and approximately how long it would take you to complete this project?

Best,
Kyle

Budget: $150
Posted On: June 29, 2017 03:19 UTC
Category: Admin Support > Web Research
Skills: Data Entry, Data Mining, Data Scraping, Internet Research, LinkedIn Recruiting, Market Research, Spreadsheets, Web Scraping
Country: United States
click to apply
          PCB Design Software Global Market by Geography & Data Mining, Analysis and Forecast 2022   
(EMAILWIRE.COM, February 25, 2017) The Global Printed Circuit Board (PCB) Design Software Market is growing at a CAGR of 5.3% over the forecast period. One of the key drivers of market growth is the ongoing effort to shorten PCB design time for innovative electronic devices. In addition,...
          (USA-CA-Menlo Park) Data Analyst - Policy Research   
**PDS Tech, Inc is seeking a candidate for the Data Analyst - Policy Research position located in Menlo Park, CA.**

+ Experience with GIS and pattern analysis; ability to use GIS technologies, such as QGIS, ArcGIS, GDAL, and/or R geospatial packages, to visualize complex maps and/or perform complex spatial analyses, evaluate data quality, and build narratives around geospatial data
+ Experience automating ETL pipelines, curating large, complex data from a variety of systems, and creating visualizations that surface analytical insights
+ Understanding of statistical analysis methodologies and experience using R or Python for statistical analysis and modeling; proven familiarity with data mining, analysis, and visualization
+ Problem-solving and analytical skills combined with the understanding of where to look next; demonstrated ability to explain complex concepts and solutions to varying audiences
+ Work closely with the policy research team to understand their metrics needs, developing a data analysis strategy that both fulfills current needs and can evolve to anticipate future ones
+ Tailor metrics reporting output to a wide variety of audiences; determine the best audience for data-driven insights
+ 3 years of relevant work experience, including database experience with SQL, Hive, query optimization and operating with data schemas
+ Experience programming (i.e. Python, R, etc.) with version control
+ Bachelor's degree in a quantitative social science such as statistics, public policy, geography or something similar

This position may require you to submit to and pass a drug test and/or background check. If this is a hands-on position, you may also be required to pass a safety and productivity examination. PDS Tech, Inc. will comply with all applicable federal and state laws governing the use of such background checks and drug tests. PDS Tech, Inc. is an Equal Opportunity Employer and will not discriminate against applicants on the basis of race, color, religion, sexual orientation, gender identity, national origin, veteran status, or disability.
          (USA-FL-Tampa) Decision Science Analyst   
Purpose of Job

IMPORTANT: External Applicants – When filling out your name and other personal information below, DO NOT USE ALL CAPS or any special characters. Use only standard letters in the English alphabet. Including special characters or all uppercase letters will cause errors in your application.

We are currently seeking talented Decision Science Analysts I (AML) for our Phoenix, AZ or San Antonio, TX facility. The ideal candidate for this position will have experience using mathematical and statistical analysis to assist management in making risk-based anti-money laundering decisions related to products, services and customers. This position requires the use of communication skills to convey the results of statistical analysis to various levels of management.

Provide decision support for business areas across the enterprise. Staff in this area will be responsible for applying mathematical and statistical techniques and/or innovative/quantitative analytical approaches to draw conclusions and make 'insight to action' recommendations to answer business objectives and drive change. The essence of work performed by the Decision Science Analyst involves gathering, manipulating and synthesizing data (e.g., attributes, transactions, behaviors, etc.), models and other relevant information to draw conclusions and make recommendations resulting in implementable strategies.

Job Requirements

* Leverages business/analytical knowledge to participate in discussions with cross-functional teams to understand and collaborate on business objectives and influence solution strategies. The business problems analyzed are typically medium to large scale with impact to current and/or future business strategy.
* Applies innovative and scientific/quantitative analytical approaches to draw conclusions and make 'insight to action' recommendations to answer the business objective and drive the appropriate change. Translates recommendations into communication materials to effectively present to colleagues for peer review and mid-to-upper level management. Incorporates visualization techniques to support the relevant points of the analysis and ease the understanding for less technical audiences.
* Identifies and gathers the relevant and quality data sources required to fully answer and address the problem for the recommended strategy through testing or exploratory data analysis (EDA). Integrates/transforms disparate data sources and determines the appropriate data hygiene techniques to apply.
* Thoroughly documents assumptions, methodology, validation and testing to facilitate peer reviews. Subsequent analysts should be able to rely on documentation to replicate and continue work.
* Understands and adopts emerging technology that can affect the application of scientific methodologies and/or quantitative analytical approaches to problem resolutions.
* Delivers analysis/findings in a manner that conveys understanding, influences mid to upper level management, garners support for recommendations, drives business decisions, and influences business strategy. Recommendations typically have an impact on business results.

*Minimum Requirements*

* If Bachelor's degree, 4+ years of experience in a decision support analytic function; or if Master's degree, 2+ years of experience in a decision support analytic function; or if PhD, 1+ years of experience in a decision support analytic function.
* Bachelor's degree in Economics, Finance, Statistics, Mathematics, Actuarial Sciences or another quantitative discipline. (Four years of work experience in statistics, mathematics or quantitative analytics or related experience can be substituted in lieu of a degree, in addition to the minimum years of work experience required; *8 years total.) Or a Master's degree in quantitative analytics or a related field, or a PhD in quantitative analytics or a related field.

*Qualifications may warrant placement in a different job level.*

When you apply for this position, you will be required to answer some initial questions. This will take approximately 5 minutes. Once you begin the questions you will not be able to finish them at a later time and you will not be able to change your responses.

*Preferred*

* Experience in categorical data analysis
* Experience in analyzing customer, transactional, and financial product data collectively
* Ability to provide robust documentation that details the full process leading to analytic findings
* Experience partnering with IT to deploy results of analysis into production
* Financial services industry experience in machine learning, statistical modeling, optimization or data mining involving large data sets
* Proficiency in data visualization and strong programming skills (VBA, SAS, SPSS, SQL, R or Python)
* Experience with very large transactional systems or with relational databases such as Oracle, SQL Server

*Knowledge/Skills/Attributes*

- Demonstrates competency in mathematical and statistical techniques and approaches used to drive fact-based decision-making.
- Experience presenting and communicating findings/recommendations to team members.
- Advanced knowledge of data analysis tools.
- Advanced knowledge in developing analysis queries and procedures in SQL, SAS, BI tools or other analysis software.
- Advanced knowledge of relevant industry data & methods and demonstrated ability to connect external insights to business problems.
- Demonstrated ability to influence business decisions.

The above description reflects the details considered necessary to describe the principal functions of the job and should not be construed as a detailed description of all the work requirements that may be performed in the job.

At USAA our employees enjoy one of the best benefits packages in the business, including a flexible business casual or casual dress environment, comprehensive medical, dental and vision plans, along with wellness and wealth building programs. Additionally, our career path planning and continuing education will assist you with your professional goals.

*Relocation* assistance is *not* *available* for this position.

*For Internal Candidates:* Must complete 12 months in current position (from date of hire or date of placement), or must have manager's approval prior to posting. *Last day for internal candidates to apply to the opening is 5/03/17 by 11:59 pm CST.*

*Decision Science Analyst* *FL-Tampa* *R0010013*
          Upcoming "Digital Self-Defense and Empowerment Workshops"    

For those of you in the NYC area this summer, the New Museum is holding a fascinating-sounding series of "Digital Self-Defense and Empowerment Workshops" on July 22nd.

This afternoon of workshops extends the exhibition's inquiry into the complexities of determining identity and truth to the online sphere. Addressing increasing vulnerability and participation in surveillance, artists and activists will offer tools to learn about how data is mined and fed back to us, as well as strategies for self-protection, particularly for members of vulnerable communities. Workshops will support the demystification of hidden processes through both tactile, hands-on experiences and analytic software.

The workshops will be presented in three separate parts throughout the day.

Part 1: Handmade Computers with Taeyoon Choi

What is computer, really? Computer is an idea that's evolved over time. The sleek machines we use day-to-day are made from elements extracted from the earth, and every bit of data is actually something, somewhere. And underneath the operating systems, there's a history that needs to be examined.


Let's build a computer, from its most fundamental elements: Adder, Clock, and Memory. By handmaking a computer, soldering electronic components, we may find an elegance in the abstraction and repetition of computational logic that can only be described as "poetic." The 1-Bit Computer Kit is an open-source tool and curriculum for making computing more accessible. By learning how computers work on a fundamental level, participants can gain agency and imagine a reciprocal relationship with technology. We can make technology more approachable by giving access to tools and ideas and demystifying computer science.

Part 2 - Data Selfie with DATA X

DATA X will demonstrate Data Selfie, a browser extension that aims to provide a personal perspective on data mining, predictive analytics, and our online data identity—including inferred information from our consumption. Algorithms and Big Data are increasingly defining our lives. Therefore, it is important—especially for those who are indifferent to this issue—to be aware of the power and influence your own data has on you. DATA X believes in information transparency, online consumer protection, and the democratization of the internet.

Part 3 - Digital Self-Defense with Equality Labs

Equality Labs, a South Asian women's, gender non-conforming, and trans tech collective will present a security self-defense training for your digital movement. In this two-hour workshop, learn more about the surveillance state and how you can be part of a collective self-defense movement to secure our phones, computers, network access, identities, and communication.

Tickets are free with pre-registration, and it looks like you cannot register for all three in one fell swoop, but need to register for each workshop individually. To do so, click on any of the links above.



          Comment on Event data mining with PowerShell by Geoff @ UVM - Custom event log queries   
[...] can use the saved query XML with PowerShell's Get-WinEvent cmdlet's -FilterXml parameter [See an example]. You can also use the Save Filter to Custom View option to make this view [...]
          DATA SCIENCE SOFTWARE DEVELOPER - IRELAND - Overstock.com - Ireland, WV   
Design, develop, and maintain data mining jobs in Java, Scala, Python, Hadoop, Spark, R, SQL, and/or other query languages.
From Overstock.com - Fri, 23 Jun 2017 19:34:32 GMT - View all Ireland, WV jobs
          updated profile   

AlchemyAPI Face Detection and Recognition Image

API Endpoint: 
http://gateway-a.watsonplatform.net/calls/image/ImageGetRankedImageFaceTags
API Description: 
The AlchemyVision Face Detection and Recognition Image API accepts an image file as input. The API will scan a photo to detect facial locations and can recognize individuals present within a photograph, such as celebrities. The API will provide data on bounding box, gender, approximate age and name, if the image is of a celebrity. The extracted metadata can be returned in XML, RDF, and JSON formats.
SSL Support: 
Yes
Twitter URL: 
https://twitter.com/AlchemyAPI
Developer Support URL: 
support@alchemyapi.com
Popularity: 
0
Device Specific: 
No
Is This a Hypermedia API?: 
Yes

          Oracle considered buying Peter Thiel's Palantir last year — and an ex-Disney exec set up the meeting (ORCL)   


Larry Ellison, Oracle's founder, chief technology officer, and largest shareholder, met with Palantir chairman Peter Thiel for lunch in 2016 to talk about Oracle buying Thiel's company, Bloomberg reported. 

Details about the secret meeting came out of a court testimony from Palantir investor Marc Abramowitz, who is suing Palantir over allegations that he was prevented from selling his stake in the data mining company. 

Ellison and Thiel's lunch was set up by Abramowitz and Michael Ovitz, a former executive at Walt Disney, to help the two companies broker a deal, according to Abramowitz's testimony. But the buyout never went through. 

Abramowitz also testified that Goldman Sachs pitched Palantir on the idea of going public in 2015 with a $30 billion offering, Bloomberg reported.

Oracle declined a request from Business Insider for comment. Palantir did not immediately respond to a request for comment. 



          How Artificial Intelligence Will Change Medical Imaging   

An example of artificial intelligence from the start-up company Viz. The image shows how the AI software automatically reviews an echocardiogram, completes an automated left ventricular ejection fraction quantification and then presents the data side by side with the original cardiology report. The goal of the software is to augment clinicians and cardiologists by helping them speed workflow, act as a second set of eyes and aid clinical decision support.

An example of how Agfa is integrating IBM Watson into its radiology workflow. Watson reviewed the X-ray images and the image order and determined the patient had lung cancer and a cardiac history and pulled in the relevant prior exams, sections of the patient history, cardiology and oncology department information. It also pulled in recent lab values, current drugs being taken. This allows for a more complete view of the patient's condition and may aid in diagnosis or determining the next step in care.  

Artificial intelligence (AI) has captured the imagination and attention of doctors over the past couple years as several companies and large research hospitals work to perfect these systems for clinical use. The first concrete examples of how AI (also called deep learning, machine learning or artificial neural networks) will help clinicians are now being commercialized. These systems may offer a paradigm shift in how clinicians work in an effort to significantly boost workflow efficiency, while at the same time improving care and patient throughput. 

Today, one of the biggest problems facing physicians and clinicians in general is the overload of too much patient information to sift through. This rapid accumulation of electronic data is thanks to the advent of electronic medical records (EMRs) and the capture of all sorts of data about a patient that was not previously recorded, or at least not easily data mined. This includes imaging data, exam and procedure reports, lab values, pathology reports, waveforms, data automatically downloaded from implantable electrophysiology devices, data transferred from the imaging and diagnostics systems themselves, as well as the information entered in the EMR, admission, discharge and transfer (ADT), hospital information system (HIS) and billing software. In the next couple years there will be a further data explosion with the use of bidirectional patient portals, where patients can upload their own data and images to their EMRs. This will include images shot with their phones of things like wound site healing to reduce the need for in-person follow-up office visits. It also will include medication compliance tracking, blood pressure and weight logs, blood sugar, anticoagulant INR and other home monitoring test results, and activity tracking from apps, wearables and the evolving Internet of things (IoT) to aid in keeping patients healthy.

Physicians liken all this data to drinking from a firehose because it is overwhelming. Many say it is very difficult or impossible to go through the large volumes of data to pick out what is clinically relevant or actionable. It is easy for things to fall through the cracks or for things to be lost to patient follow-up. This issue is further compounded when you add factors like increasing patient volumes, lower reimbursements, bundled payments and the conversion from fee-for-service to a fee-for-value reimbursement system. 

This is where artificial intelligence will play a key role in the next couple years. AI will not be diagnosing patients and replacing doctors — it will be augmenting their ability to find the key, relevant data they need to care for a patient and present it in a concise, easily digestible format. When a radiologist calls up a chest computed tomography (CT) scan to read, the AI will review the image and identify potential findings immediately — from the image and also by combing through the patient history  related to the particular anatomy scanned. If the exam order is for chest pain, the AI system will call up:

  • All the relevant data and prior exams specific to prior cardiac history;
  • Pharmacy information regarding drugs specific to COPD, heart failure, coronary disease and anticoagulants;
  • Prior imaging exams from any modality of the chest that may aid in diagnosis;
  • Prior reports for that imaging;
  • Prior thoracic or cardiac procedures;
  • Recent lab results; and
  • Any pathology reports that relate to specimens collected from the thorax.

Patient history from prior reports or the EMR that may be relevant to potential causes of chest pain will also be collected by the AI and displayed in brief, with links to the full information (such as history of aortic aneurysm, high blood pressure, coronary blockages, history of smoking, prior pulmonary embolism, cancer, implantable devices or deep vein thrombosis). This information would otherwise take too long to collect, or its existence might not be known by the physician, so they would not have spent time looking for it.
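As a purely illustrative sketch of that kind of record triage (the Record structure, keyword list and function below are invented for this example, not taken from any real EMR system or vendor product):

    from dataclasses import dataclass

    @dataclass
    class Record:
        kind: str       # e.g. "imaging", "lab", "pathology", "pharmacy"
        body_site: str  # e.g. "chest", "abdomen"
        text: str       # report text or medication list

    # Hypothetical keywords an assistant might associate with chest pain.
    CHEST_PAIN_KEYWORDS = {"aortic aneurysm", "coronary", "pulmonary embolism",
                           "smoking", "deep vein thrombosis", "heart failure"}

    def relevant_records(records, complaint_site="chest"):
        """Return the subset of a patient's records worth surfacing for a
        chest-pain imaging order: same anatomy, or keyword-relevant text."""
        hits = []
        for r in records:
            if r.body_site == complaint_site:
                hits.append(r)
            elif any(k in r.text.lower() for k in CHEST_PAIN_KEYWORDS):
                hits.append(r)
        return hits

A real system would rank records by clinical relevance learned from data rather than by a fixed keyword list; the sketch only shows the shape of the aggregation step.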

Watch the VIDEO “Examples of Artificial Intelligence in Medical Imaging Diagnostics.” This shows an example of how AI can assess aortic dissection CT images.
 

Watch the VIDEO “Development of Artificial Intelligence to Aid Radiology,” an interview with Mark Michalski, M.D., director of the Center for Clinical Data Science at Massachusetts General Hospital, explaining the basis of artificial intelligence in radiology.

At the 2017 Healthcare Information and Management Systems Society (HIMSS) annual conference in February, several vendors showed some of the first concrete examples of how this type of AI works. IBM/Merge, Philips, Agfa and Siemens have already started integrating AI into their medical imaging software systems. GE showed predictive analytics software that uses elements of AI to model the impact on imaging departments when someone calls in sick or when patient volumes increase. Vital showed similar work-in-progress predictive analytics software for imaging equipment utilization. Others, including several analytics companies and startups, showed software that uses AI to quickly sift through massive amounts of big data or offer immediate clinical decision support for appropriate use criteria, for choosing the best test or imaging to make a diagnosis, or even for offering differential diagnoses.

Philips uses AI as a component of its new Illumeo software with adaptive intelligence, which automatically pulls in related prior exams for radiology. The user can click on an area of the anatomy in a specific MPR view, and the AI will find and open prior imaging studies to show the same anatomy, slice and orientation. For oncology imaging, with a couple of clicks on the tumor in the image, the AI will perform an automated quantification and then perform the same measures on the priors, presenting a side-by-side comparison of the tumor assessment. This can significantly reduce the time involved with tumor tracking assessment and speed workflow.

Read the blog about AI at HIMSS 2017 "Two Technologies That Offer a Paradigm Shift in Medicine at HIMSS 2017."

 

AI is Elementary to Watson

IBM Watson has been cited for the past few years as being at the forefront of medical AI, but it has yet to commercialize the technology. Some of the first versions of work-in-progress software were shown at HIMSS by partner vendors Agfa and Siemens. Agfa showed an impressive example of how the technology works. A digital radiography (DR) chest X-ray exam was called up, and Watson reviewed the image and determined the patient had small-cell lung cancer and evidence of both lung and heart surgery. Watson then searched the picture archiving and communication system (PACS), EMR and departmental reporting systems to bring in:

  • Prior chest imaging studies;
  • Cardiology report information;
  • Medications the patient is currently taking;
  • Patient history relevant to them having COPD and a history of smoking that might relate to their current exam;
  • Recent lab reports;
  • Oncology patient encounters including chemotherapy; and
  • Radiation therapy treatments.

When the radiologist opens the study, all this information is presented in a concise format and greatly enhances the picture of this patient’s health. Agfa said the goal is to improve the radiologist’s understanding of the patient to improve the diagnosis, therapies and resulting patient outcomes without adding more burden on the clinician. 

IBM purchased Merge Healthcare in 2015 for $1 billion, partly to get an established foothold in the medical IT market. However, the purchase also gave Watson millions of radiology studies and a vast amount of existing medical record data to help train the AI in evaluating patient data and get better at reading imaging exams. IBM Watson is now licensing its software through third-party agreements with other health IT vendors. The contracts stipulate that each vendor needs to add additional value to Watson with their own programming, not just become a reseller. Probably the most important stipulation of these new contracts is that vendors also are required to share access to all the patient data and imaging studies they have access to. This allows Watson to continue to hone its clinical intelligence with millions of new patient records.  
 

The Basics of Machine Learning

Access to vast quantities of patient data and images is needed to feed the AI software algorithms educational material to learn from. Sorting through massive amounts of big data is a major component of how AI learns what is important for clinicians, which data elements are related to various disease states, and how to gain clinical understanding. The process is similar to medical students learning the ropes, but it uses far more educational input than a human can comprehend. The first step in machine learning software is to ingest medical textbooks and care guidelines and then review examples of clinical cases. Unlike human students, the number of cases AI learns from runs into the millions.

For cases where the AI did not accurately determine the disease state or found incorrect or irrelevant data, software programmers go back and refine the AI algorithm iteration after iteration until the AI software gets it right in the majority of cases. In medicine, there are so many variables that it is difficult for people or machines to always arrive at the correct diagnosis. However, percentage-wise, experts now say AI software reading medical imaging studies can often match or, in some cases, outperform human radiologists. This is especially true for rare diseases or presentations, where a radiologist might only see a handful of such cases during an entire career. AI has the advantage of reviewing hundreds or even thousands of these rare studies from archives to become proficient at reading them and identifying a proper diagnosis. And unlike a human's memory of such cases, the computer's always stays fresh.

AI algorithms read medical images much as radiologists do: by identifying patterns. AI systems are trained using vast numbers of exams to determine what normal anatomy looks like on scans from CT, magnetic resonance imaging (MRI), ultrasound or nuclear imaging. Then abnormal cases are used to train the eye of the AI system to identify anomalies, similar to computer-aided detection software (CAD). However, unlike CAD, which just highlights areas a radiologist may want to take a closer look at, AI software has a more analytical cognitive ability, based on much more clinical data and reading experience than previous generations of CAD software. For this reason, experts who are helping develop AI for medicine often refer to this cognitive ability as “CAD that works.”
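As a rough illustration of that supervised training setup, here is a minimal sketch in Python/PyTorch (the tiny architecture, 64x64 image size and random batch are invented for the example; this is not any vendor's actual model):

    import torch
    import torch.nn as nn

    class TinyScanClassifier(nn.Module):
        """Toy CNN separating "normal" from "abnormal" studies."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, 2)  # two classes

        def forward(self, x):  # x: (batch, 1, 64, 64) grayscale images
            return self.head(self.features(x).flatten(1))

    model = TinyScanClassifier()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One refinement iteration on a synthetic labeled batch; in practice
    # this loop runs over millions of curated, labeled exams.
    images, labels = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()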

   

AI All Around Us and the Next Step in Radiology

Deep learning computers are already driving cars, monitoring financial data for theft, translating languages and recognizing people's moods based on facial recognition, said Keith Dreyer, DO, Ph.D., vice chairman of radiology computing and information sciences at Massachusetts General Hospital, Boston. He was among the key speakers at the opening session of the 2016 Radiological Society of North America (RSNA) meeting in November, where he discussed AI's entry into medical imaging. He is also in charge of his institution's development of its own AI system to assist physicians at Mass General.

“The data science revolution started about five years ago with the advent of IBM Watson and Google Brain,” Dreyer explained. He said the 2012 introduction of deep learning algorithms really pushed AI forward and by 2014 the scales began to tip in terms of machines reading radiology studies correctly, reaching around 95 percent accuracy.

Dreyer said AI software for imaging is not new, as most people already use it on Facebook, which automatically tags friends the platform identifies using facial recognition algorithms. He said training AI is a similar concept: you can start by showing a computer photos of cats and dogs, and after enough images it can be trained to tell the difference.

AI requires big data, massive computing power, powerful algorithms, broad investments and then a lot of translation and integration from a programming standpoint before it can be commercialized, Dreyer said. 

From a radiology standpoint, he said there are two types of AI. The first type that is already starting to see U.S. Food and Drug Administration approval is for quantification AI, which only requires a 510(k) approval. AI developed for clinical interpretation will require FDA pre-market approval (PMA), which involves clinical trials.

Before machines start conducting primary or peer review reads, Dreyer said it is much more likely AI will be used to read old exams retrospectively to help hospitals find new patients for conditions the patient may not realize they have. He said about 9 million Americans qualify for low-dose CT scans to screen them for lung cancer. He said AI can be trained to search through all the prior chest CT exams on record in the health system to help identify patients that may have lung cancer. This type of retrospective screening may apply to other disease states as well, especially if the AI can pull in genomic testing results to narrow the review to patients who are predisposed to some diseases. 

He said overall, AI offers a major opportunity to enhance and augment radiology reading, not to replace radiologists. 

“We are focused on talking into a microphone and we are ignoring all this other data that is out there in the patient record,” Dreyer said. “We need to look at the imaging as just another source of data for the patient.” He said AI can help automate quantification and quickly pull out related patient data from the EMR that will aid diagnosis or the understanding of a patient’s condition.

Watch a VIDEO interview with Eliot L. Siegel, M.D., Dwyer Lecturer and Closing Keynote Speaker, Vice Chair of Radiology at the University of Maryland and Chief of Radiology for the VA Maryland Healthcare System, who talks about the current state of the industry in computer-aided detection and diagnosis at SIIM 2016.

Read the blog “How Intelligent Machines Could Make a Difference in Radiology.”


          Elements of Statistical Learning - Chapter 2 Solutions   

The Stanford textbook Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is an excellent (and freely available) graduate-level text in data mining and machine learning. I'm currently working through it, and I'm putting my (partial) exercise solutions up for anyone who might find them useful. The first set of solutions is for Chapter 2, An Overview of Supervised Learning, introducing least squares and k-nearest-neighbour techniques.

Exercise Solutions

See the solutions in PDF format (source) for a more pleasant reading experience. This webpage was created from the LaTeX source using the LaTeX2Markdown utility - check it out on GitHub.

Overview of Supervised Learning

Exercise 2.1

Suppose that each of the $K$ classes has an associated target $t_k$, which is a vector of all zeroes, except a one in the $k$-th position. Show that classifying to the largest element of $\hat y$ amounts to choosing the closest target, $\min_k \| t_k - \hat y \|$, if the elements of $\hat y$ sum to one.

Proof

The assertion is equivalent to showing that \begin{equation} \text{argmax}_i \hat y_i = \text{argmin}_k \| t_k - \hat y \| = \text{argmin}_k \|\hat y - t_k \|^2 \end{equation} by monotonicity of $x \mapsto x^2$ and symmetry of the norm.

WLOG, let $\| \cdot \|$ be the Euclidean norm $\| \cdot \|_2$. Let $k = \text{argmax}_i \hat y_i$, with $\hat y_k = \max y_i$. Note that then $\hat y_k \geq \frac{1}{K}$, since $\sum \hat y_i = 1$.

Then for any $k' \neq k$ (note that $y_{k'} \leq y_k$), we have \begin{align} \| y - t_{k'} \|_2^2 - \| y - t_k \|_2^2 &= y_k^2 + \left(y_{k'} - 1 \right)^2 - \left( y_{k'}^2 + \left(y_k - 1 \right)^2 \right) \\ &= 2 \left(y_k - y_{k'}\right) \\ &\geq 0 \end{align} since $y_{k'} \leq y_k$ by assumption.

Thus we must have

\begin{equation} \label{eq:6} \text{argmin}_k \| t_k - \hat y \| = \text{argmax}_i \hat y_i \end{equation}

as required.

Exercise 2.2

Show how to compute the Bayes decision boundary for the simulation example in Figure 2.5.

Proof

The Bayes classifier is \begin{equation} \label{eq:2} \hat G(X) = \text{argmax}_{g \in \mathcal G} P(g | X = x ).
\end{equation}

In our two-class example $\textbf{orange}$ and $\textbf{blue}$, the decision boundary is the set where

\begin{equation} \label{eq:5} P(g=\textbf{blue} | X = x) = P(g =\textbf{orange} | X = x) = \frac{1}{2}. \end{equation}

By the Bayes rule, this is equivalent to the set of points where

\begin{equation} \label{eq:4} P(X = x | g = \textbf{blue}) P(g = \textbf{blue}) = P(X = x | g = \textbf{orange}) P(g = \textbf{orange}) \end{equation}

As we know $P(g)$ and $P(X=x|g)$, the decision boundary can be calculated.
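As a concrete illustration, here is a minimal Python sketch of this calculation for a Gaussian-mixture setup like the one described for Figure 2.5 (10 component means per class, observations drawn around them with variance 1/5; the seed and plotting grid are arbitrary choices):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # 10 mixture means per class, as in the book's simulation example.
    means_blue = rng.multivariate_normal([1, 0], np.eye(2), 10)
    means_orange = rng.multivariate_normal([0, 1], np.eye(2), 10)

    def class_density(points, means, var=0.2):
        """Equal-weight Gaussian-mixture density over the given means."""
        d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * var)).mean(axis=1) / (2 * np.pi * var)

    # With equal priors, the Bayes boundary is where the class densities match.
    xx, yy = np.meshgrid(np.linspace(-3, 4, 200), np.linspace(-3, 4, 200))
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    diff = class_density(grid, means_blue) - class_density(grid, means_orange)
    plt.contour(xx, yy, diff.reshape(xx.shape), levels=[0.0])
    plt.show()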

Exercise 2.3

Derive equation (2.24)

Proof

TODO

Exercise 2.4

Consider $N$ data points uniformly distributed in a $p$-dimensional unit ball centered at the origin. Show the the median distance from the origin to the closest data point is given by \begin{equation} \label{eq:7} d(p, N) = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p} \end{equation}

Proof

Let $r$ be the median distance from the origin to the closest data point. Then \begin{equation} \label{eq:8} P(\text{All $N$ points are further than $r$ from the origin}) = \frac{1}{2} \end{equation} by definition of the median.

Since the points $x_i$ are independently distributed, this implies that \begin{equation} \label{eq:9} \frac{1}{2} = \prod_{i=1}^N P(\|x_i\| > r) \end{equation} and as the points $x_i$ are uniformly distributed in the unit ball, we have that \begin{align} P(\| x_i \| > r) &= 1 - P(\| x_i \| \leq r) \\ &= 1 - \frac{Kr^p}{K} \\ &= 1 - r^p \end{align}

Putting these together, we obtain that \begin{equation} \label{eq:10} \frac{1}{2} = \left(1-r^p \right)^{N}
\end{equation} and solving for $r$, we have \begin{equation} \label{eq:11} r = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p} \end{equation}
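A quick Monte Carlo check of this formula (a sketch; it uses the standard trick of normalizing Gaussian vectors to the sphere and scaling radii by $U^{1/p}$ to sample uniformly from the ball):

    import numpy as np

    rng = np.random.default_rng(0)

    def median_min_distance(N, p, reps=2000):
        """Monte Carlo median distance from the origin to the closest of
        N points drawn uniformly from the p-dimensional unit ball."""
        x = rng.standard_normal((reps, N, p))
        x /= np.linalg.norm(x, axis=-1, keepdims=True)   # project to sphere
        x *= rng.random((reps, N, 1)) ** (1.0 / p)       # radial rescaling
        return np.median(np.linalg.norm(x, axis=-1).min(axis=1))

    N, p = 100, 10
    print((1 - 0.5 ** (1 / N)) ** (1 / p), median_min_distance(N, p))
    # both approximately 0.61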

Exercise 2.5

Consider inputs drawn from a spherical multivariate-normal distribution $X \sim N(0,\mathbf{1}_p)$. The squared distance from any sample point to the origin has a $\chi^2_p$ distribution with mean $p$. Consider a prediction point $x_0$ drawn from this distribution, and let $a = \frac{x_0}{\| x_0\|}$ be an associated unit vector. Let $z_i = a^T x_i$ be the projection of each of the training points on this direction.

Show that the $z_i$ are distributed $N(0,1)$ with expected squared distance from the origin 1, while the target point has expected squared distance $p$ from the origin.

Hence for $p = 10$, a randomly drawn test point is about 3.1 standard deviations from the origin, while all the training points are on average one standard deviation along direction a. So most prediction points see themselves as lying on the edge of the training set.

Proof

Let $z_i = a^T x_i = \frac{x_0^T}{\| x_0 \|} x_i$. Then $z_i$ is a linear combination of $N(0,1)$ random variables, and hence normal, with expectation zero and variance

\begin{equation} \label{eq:12} \text{Var}(z_i) = \| a^T \|^2 \text{Var}(x_i) = \text{Var}(x_i) = 1 \end{equation} as the vector $a$ has unit length and $x_i \sim N(0, 1)$.

The target point $x_0$, on the other hand, has squared distance from the origin distributed as $\chi^2_p$, with expected value $p$, as required.

Exercise 2.6

  1. Derive equation (2.27) in the notes.
  2. Derive equation (2.28) in the notes.

Proof

  1. We have \begin{align} EPE(x_0) &= E_{y_0 | x_0} E_{\mathcal{T}}(y_0 - \hat y_0)^2 \\ &= \text{Var}(y_0|x_0) + E_{\mathcal T}[\hat y_0 - E_{\mathcal T} \hat y_0]^2 + [E_{\mathcal T} \hat y_0 - x_0^T \beta]^2 \\ &= \text{Var}(y_0 | x_0) + \text{Var}_\mathcal{T}(\hat y_0) + \text{Bias}^2(\hat y_0). \end{align} We now treat each term individually. Since the estimator is unbiased, the third term is zero. Since $y_0 = x_0^T \beta + \epsilon$ with $\epsilon$ an $N(0,\sigma^2)$ random variable, we must have $\text{Var}(y_0|x_0) = \sigma^2$. The middle term is more difficult. First, note that we have \begin{align} \text{Var}_{\mathcal T}(\hat y_0) &= \text{Var}_{\mathcal T}(x_0^T \hat \beta) \\ &= x_0^T \text{Var}_{\mathcal T}(\hat \beta) x_0 \\ &= E_{\mathcal T} x_0^T \sigma^2 (\mathbf{X}^T \mathbf{X})^{-1} x_0 \end{align} by conditioning (3.8) on $\mathcal T$.
  2. TODO

Exercise 2.7

Consider a regression problem with inputs $x_i$ and outputs $y_i$, and a parameterized model $f_\theta(x)$ to be fit with least squares. Show that if there are observations with tied or identical values of $x$, then the fit can be obtained from a reduced weighted least squares problem.

Proof

This is relatively simple. WLOG, assume that $x_1 = x_2$, and all other observations are unique. Writing $\bar y = \frac{1}{2}(y_1 + y_2)$, a direct expansion gives \begin{equation} \left(y_1 - f_\theta(x_1)\right)^2 + \left(y_2 - f_\theta(x_1)\right)^2 = 2\left(\bar y - f_\theta(x_1)\right)^2 + \frac{1}{2}\left(y_1 - y_2\right)^2, \end{equation} so the RSS function in the general least-squares estimation becomes

\begin{equation} \label{eq:13} RSS(\theta) = \sum_{i=1}^N \left(y_i - f_\theta(x_i) \right)^2 = \sum_{i=2}^N w_i \left(\tilde y_i - f_\theta(x_i) \right)^2 + \frac{1}{2}\left(y_1 - y_2\right)^2 \end{equation}

where \begin{equation} \label{eq:14} w_i = \begin{cases} 2 & i = 2 \\ 1 & \text{otherwise,} \end{cases} \qquad \tilde y_i = \begin{cases} \bar y & i = 2 \\ y_i & \text{otherwise.} \end{cases} \end{equation}

Since the leftover term $\frac{1}{2}(y_1 - y_2)^2$ does not depend on $\theta$, minimizing $RSS(\theta)$ is equivalent to minimizing the reduced weighted least squares criterion. This minimal example generalizes directly: each group of tied observations is replaced by its mean response, weighted by the group size.

Exercise 2.8

Suppose that we have a sample of $N$ pairs $x_i, y_i$, drawn IID from the distribution such that \begin{align} x_i \sim h(x), \\ y_i = f(x_i) + \epsilon_i, \\ E(\epsilon_i) = 0, \\ \text{Var}(\epsilon_i) = \sigma^2. \end{align} We construct an estimator for $f$ linear in the $y_i$, \begin{equation} \label{eq:16} \hat f(x_0) = \sum_{i=1}^N \ell_i(x_0; \mathcal X) y_i \end{equation} where the weights $\ell_i(x_0; X)$ do not depend on the $y_i$, but do depend on the training sequence $x_i$ denoted by $\mathcal X$.

  1. Show that the linear regression and $k$-nearest-neighbour regression are members of this class of estimators. Describe explicitly the weights $\ell_i(x_0; \mathcal X)$ in each of these cases.
  2. Decompose the conditional mean-squared error \begin{equation} \label{eq:17} E_{\mathcal Y | \mathcal X} \left( f(x_0) - \hat f(x_0) \right)^2 \end{equation} into a conditional squared bias and a conditional variance component. $\mathcal Y$ represents the entire training sequence of $y_i$.
  3. Decompose the (unconditional) MSE \begin{equation} \label{eq:18} E_{\mathcal Y, \mathcal X}\left(f(x_0) - \hat f(x_0) \right)^2 \end{equation} into a squared bias and a variance component.
  4. Establish a relationship between the square biases and variances in the above two cases.

Proof

  1. Recall that the estimator for $f$ in the linear regression case is given by \begin{equation} \label{eq:19} \hat f(x_0) = x_0^T \beta \end{equation} where $\beta = (X^T X)^{-1} X^T y$. Then we can simply write \begin{equation} \label{eq:20} \hat f(x_0) = \sum_{i=1}^N \left( x_0^T (X^T X)^{-1} X^T \right)_i y_i. \end{equation} Hence \begin{equation} \label{eq:21} \ell_i(x_0; \mathcal X) = \left( x_0^T (X^T X)^{-1} X^T \right)_i. \end{equation} In the $k$-nearest-neighbour representation, we have \begin{equation} \label{eq:22} \hat f(x_0) = \sum_{i=1}^N \frac{y_i}{k} \mathbf{1}_{x_i \in N_k(x_0)} \end{equation} where $N_k(x_0)$ represents the set of $k$-nearest-neighbours of $x_0$. Clearly, \begin{equation} \label{eq:23} \ell_i(x_0; \mathcal X) = \frac{1}{k} \mathbf{1}_{x_i \in N_k(x_0)} \end{equation}

  2. TODO

  3. TODO
  4. TODO

Exercise 2.9

Compare the classification performance of linear regression and $k$-nearest neighbour classification on the zipcode data. In particular, consider only the 2's and 3's, and $k = 1, 3, 5, 7, 15$. Show both the training and test error for each choice.

Proof

Our implementation in R and the resulting graphs are attached.
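For readers without R, here is a minimal Python sketch of the same comparison (it assumes zip.train and zip.test from the book's website have been downloaded and unpacked locally; the first column of each file is the digit label):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def twos_threes(data):
        mask = np.isin(data[:, 0], (2, 3))
        return data[mask, 1:], data[mask, 0]

    X_tr, y_tr = twos_threes(np.loadtxt("zip.train"))
    X_te, y_te = twos_threes(np.loadtxt("zip.test"))

    # Linear regression on a 0/1 coding of the class, thresholded at 0.5.
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    beta, *_ = np.linalg.lstsq(A, (y_tr == 3).astype(float), rcond=None)
    pred = np.column_stack([np.ones(len(X_te)), X_te]) @ beta > 0.5
    print("linear regression test error:", np.mean(pred != (y_te == 3)))

    for k in (1, 3, 5, 7, 15):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        print(f"{k}-NN test error:", 1 - knn.score(X_te, y_te))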

Exercise 2.10

Consider a linear regression model with $p$ parameters, fitted by OLS to a set of training data $(x_i, y_i)_{1 \leq i \leq N}$ drawn at random from a population. Let $\hat \beta$ be the least squares estimate. Suppose we have some test data $(\tilde x_i, \tilde y_i)_{1 \leq i \leq M}$ drawn at random from the same population as the training data. If $R_{tr}(\beta) = \frac{1}{N} \sum_{i=1}^N \left(y_i - \beta^T x_i \right)^2$ and $R_{te}(\beta) = \frac{1}{M} \sum_{i=1}^M \left( \tilde y_i - \beta^T \tilde x_i \right)^2$, prove that \begin{equation} \label{eq:15} E(R_{tr}(\hat \beta)) \leq E(R_{te}(\hat \beta)) \end{equation} where the expectation is over all that is random in each expression.


          Netvibes: a true entrepreneur school !   


This Netviber Experience blog had been asleep for a few months when, just a few days ago, Tariq Krim, the Founder and former CEO of Netvibes, launched his new start-up: Jolicloud.com, the EasySimple cloud-based netbook OS.

As I am very interested in the subject, I checked out Tariq Krim's new team, and I was not surprised to see two former interns from Netvibes, the best Personalized Startpage, working with him.

That gave me the idea to check: "What have the other Netvibes team members become?"

And after a few searches and emails here is the result:

First, the team running Netvibes after his departure is still the one he hired: Freddy Mini (CEO), Annabelle Malherbe (CFO), Franck Mahon (VP Product), Florent Solt (CTO), Maurice Sway (Lead Designer), Stefan Lecher and Chris Damson (Business development).
Second, what strikes me the most is that Tariq Krim is not only one of the best Web talent scouts but also that his presence, vision and charisma have produced what I would call a "European start-up syndrome" similar to what you can see in Silicon Valley.
Just check the list of former Netvibes team members or interns who have spread out from Netvibes and you will agree with me!

This list is in no particular order and if somebody knows more, please let me know ;-)

"Génération Netvibes" :
the entrepreneurs vibes,

- Romain Huet,
Co-Founder of JoliCloud with Tariq Krim
Lead Developer
http://www.jolicloud.com

- Hubert Michaux,
Co-founder & CEO at HelloCoton
http://www.Hellocoton.fr

- Victor Cerutti,
Co-founders & CTO at HelloCoton
http://www.Hellocoton.fr

- François Hodierne,
Founder at h6e and ladistribution
http://www.ladistribution.net
http://h6e.net
Founder at Blogmarks.net
http://blogmarks.net

- Marc Thouvenin,
Founder and CEO of Regioneo
http://www.regioneo.com

- Alexander Kirk,
Founder of Factolex (Austria)
http://www.factolex.com

- Gang Lu,
Co-Founder of OpenWeb Asia
http://www.openweb.asia

- Colin Romain,
Founder and CEO of Fubiz
http://www.fubiz.net/

- Florent Gibeaud,
Co-Founder of Investside and Keo Networks
http://www.investside.com/

- Christophe Dufour,
Co-Founder and CEO of Investside
http://www.investside.com/
Co-Founder and CEO of Keo Networks
http://www.keonetworks.com/

- Aurélien Faches,
Co-Founder of Aaaliens
http://aaaliens.com/

- Michael Cohen,
Founder and CEO of a new company (if anyone has the name let me know)

- Jean Francois Groff,
CTO at Fairtilizer and founder at Vizta
http://fairtilizer.com

UPDATE:
- Nicolas Dangler,
Co-Founder of Mixin.com
http://Mixin.com


I also tracked the others who left:

- Antoine Marguerie,
Lead Designer at Fairtilizer
http://fairtilizer.com

- Laure Chouillou ,
Communication Manager at Forum Netexplorateur

- François Bureau,
Support & QA Manager at Citiesxl
http://www.citiesxl.com

- Tristan Groléat,
Intern at Jolicloud
http://www.jolicloud.com


So, 14 people left Netvibes to create their own companies and become entrepreneurs, 4 more are in the internet start-up business and, as far as I know, the rest of the team is still in place.

Preparing this list, I also thought of all those who were influenced by Tariq Krim's Netvibes approach and, yes, I admit it, I was among them!

First, I just launched a new blog about... Jolicloud, called: JoliCloud Experience!

But that is not all: I am also preparing to launch a new Web venture with my partner Kim: 1PRISM.com

So, Thanks Tariq ! And, Who is Next ? ;-)

          ... and Netviber Experience' (NetEx) blog will stop here !   
Hi to all my readers,

When I started this Netviber Experience blog in December 2006, Netvibes was a young one-year-old start-up.

Now Netvibes has turned 3 and is a "mature company" ;-).

I think that this blog has completed its mission and I have decided that this will be my last post here !

Don't be sad, I will continue to participate with my own posts in Netvibes itself.

Just follow me from my Universe: http://www.netvibes.com/netviberexperience .
(click on my profile and add me to your friends. It will be a pleasure for me :-)

Then I will also continue to post on twitter, friendfeed, facebook, ...

So thank you to all and...

Arrivederci !




          Happy Brithday, Netvibes !!! 3 Years !!!   

Time flies !!

Joyeux Anniversaire !!!

Buon Compleanno !!!

Happy Birthday !!!
          Netvibes never closes, but this NetEx Blog does...   

... so I will take 2 weeks off. No, do not cry ! I will be back :-)

And do you want to know what books I'll take with me? Well, just check the cover here. It was spotted by my 5-year-old daughter among millions of books.

And it is just what I was looking for: another vision of our ever-changing life.

Microtrends: Surprising Tales of the Way We Live Today
by Mark J. Penn, E. Kinney Zalesne

It has good reviews and looks as good bet for a summer reading with kids around. I'll let you know...

And I will also bring the best-selling "Millennium" novel series by STIEG LARSSON.

All I read about it is this : "An epic tale of serial murder and corporate trickery spanning several continents, the novel takes in complicated international financial fraud and the buried evil past of a wealthy Swedish industrial family."

Please do not tell me how it ends :) but feel free to leave me your own reading suggestions!
             
All It Takes To Inflate Your FeedBurner Numbers Is a Netvibes Account . Check http://ping.fm/mSveK
             
Confirmed Water on Mars: Biggest science news of the decade ?
          Confirmed Water on Mars: that is the greatest news of the decade !
but nobody is reacting :( http://ping.fm/27413
          New Netvibes Buzz is here   
Fast and set on 48 hours http://ping.fm/Z7TQc . Would love a Weekly and Monthly ranking if on holidays ;)
          New Netvibes Activities board is super fast   
try http://ping.fm/nwi68 Suggestion?
I would like to be able to Star them here
             
Want to be a Beta tester for http://ping.fm too? Use this code: pingadactyl. FAST, it will not last long ;)
             
posting from Ping.fm, works also on Blogger
          testing ping.fm   
With one click, I am posting the same update and micro-blog to all my social services: FB, FF, Identi.ca, Twitter ...
          http://www.netvibes.com/buzz is down : Day 2.    
Any news??
          My worst nightmare ? My Netvibes page invaded by smileys ;   
check out this awful Netvibes clone: smileycentral
          Anyone already on identi.ca (OpenSource Twitter-Like) ? Join me!   
http://identi.ca/netviber/
          Best idea for Summer 2008: TechCrunch's Web Tablet For $200   
Nik Cubrilovic and Michael Arrington of tech blog TechCrunch.com have launched what I find the most interesting idea of this summer: "Today at Techcrunch we announced that we are building our own web tablet hardware device:
A web Tablet for just 200$"

"Here’s the basic idea: The machine is as thin as possible, runs low end hardware and has a single button for powering it on and off, headphone jacks, a built in camera for video, low end speakers, and a microphone. It will have Wifi, maybe one USB port, a built in battery, half a Gigabyte of RAM, a 4-Gigabyte solid state hard drive. Data input is primarily through an iPhone-like touch screen keyboard. It runs on linux and Firefox. It would be great to have it be built entirely on open source hardware, but including Skype for VOIP and video calls may be a nice touch, too."

The Best part of it is the TOUCH SCREEN and also the collaborative side of the project:

"The goal is to keep the machine very simple and very cheap. I think this will be a lot of fun, and it may just turn into an actual product that we can use to surf the web and talk to our friends.

We’ll be coordinating the project over at TechCrunchIT. Leave a comment there if you want to participate and we’ll be in touch soon."

I like the idea of this low-profile screen that could be in every kitchen and every shop.

More details also here.
          I prefer the new Facebook user profile style   
check it here http://www.new.facebook.com/home.php
          Do you know your Netvibes' user ID number ? Check it out!   
mine is: 16994722

Just go here: http://www.netvibes.com/activities ,
click on your username in the menu,
go to the bottom of the page, click on the JSON icon, and check for the first "userId": the number is your own User ID.
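If you prefer to script it, here is an illustrative Python sketch of the same steps (the feed URL and its exact JSON shape are assumptions; point urlopen at whatever address the JSON icon links to):

    import json
    from urllib.request import urlopen

    # Hypothetical: the address behind the JSON icon on the activities page.
    with urlopen("http://www.netvibes.com/activities?format=json") as resp:
        data = json.load(resp)

    def find_user_id(obj):
        """Depth-first search for the first 'userId' key in nested JSON."""
        if isinstance(obj, dict):
            if "userId" in obj:
                return obj["userId"]
            children = obj.values()
        elif isinstance(obj, list):
            children = obj
        else:
            return None
        for child in children:
            found = find_user_id(child)
            if found is not None:
                return found
        return None

    print(find_user_id(data))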
          Where are the Netvibes' Communities ?   
still little buzz, no ? Why ?
          NetEx blog goes also mobile: http://netviberexperience.mofuse.mobi/   

I just launched the mobile version of the Netviber Experience blog (NetEx). Check it out here:

http://netviberexperience.mofuse.mobi/

You will be able to check ideas, tips and data mining on Netvibes from your mobile phone (wap) or iphone ;)

As you can see it is powered by MOFUSE

"MOFUSE gives content publishers, like bloggers, the ability to publish their content to the mobile web. You don't have to be a content publisher to use MoFuse though, anyone can create a mobile website in just a few minutes using the intuitive MoFuse platform."
AND IT IS FREE!

MOFUSE powers already:

Mashable Mobile
Read Write Web Mobile
LiveSide Mobile
MakeUseOf Mobile
Harvard Business Mobile
COOL HUNTING Mobile
Political Wire Mobile
ReadBurner Mobile


          I miss the possibility to update my Netvibes status from my wap mobile   
With the new 3G iphone coming, I wanted to test the mobile versions of Netvibes.

I know that Netvibes' strategy is not focused on the mobile platform, contrary to other competitors like Webwag. Nevertheless, when I am travelling, I like the idea of being able to check my Netvibes page on my mobile.

I can read feeds and twitts and check and write emails with gmail. And my web browsing is limited to wap compatible websites

But now, with the new open Activities policy by Netvibes, I can also read the feed about my netvibes' friends' Activity, my own activity and the public stream.

What I miss the most is the possibility to update my Netvibes status from my mobile. I miss it because I can update my twitter from my mobile Netvibes but not my own Netvibes activity :(

To go mobile with Netvibes, I found 3 different urls:

http://m.netvibes.com

http://wap.netvibes.com (the lightweight "basic" version)

and

http://iphone.netvibes.com

I presume that the first 2 give the same results, as it looks like that on my SonyEricsson.

The iphone is reserved for ... well the iphone.
          New Buzz and Activities page now officially available for Netvibes

A new green button, Browse all Activities, is now available in your Activities menu. That way, the Buzz and other Activities that I announced are officially made available to all users.

By clicking the Green Button you will open the new Activities page but that will replace your Netvibes page.

I would prefer it to open in a new window or new tab...

With the Activities page you will be able to follow the Buzz, the public stream, and your own and your Friends' activities.

Personally, I would prefer to have a new Buzz icon in the upper menu to check first the Buzz but also the other activities, possibly in a new tab/page and/or in the sidebar ;)
          new Netvibes' Share option to customize and export UWA Widgets   


In a very rich week full of new features (Google search box, Buzz and Public stream), Netvibes will also be offering a new Share option to customize the appearance of UWA Widgets.

As the whole ecosystem will be rebuilt with new features (number of installations, platform % and weekly trends) and a better search :), a new option is already offered under the Share menu in the ecosystem's bottom menu line.

With this you can export the widget you like (here is mine: the NetEx blog) with a few customizations and preferences:

  • title of widget
  • height
  • width
  • border colour
  • ...

Also very interesting is the GRAB THE WIDGET CODE window: with a copy/paste you can now export any UWA widget and place it on any web page (except those, like Blogger, that block JavaScript)!
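
To make the idea concrete, here is a small sketch of what such an exported snippet could look like, built from the same preferences listed above. This is not the actual code Netvibes generates (use the GRAB THE WIDGET CODE window for that); the widget address and the iframe markup are assumptions for illustration only.

```python
# Illustration only: build an embed snippet parameterized by the same
# preferences the Share dialog offers (title, height, width, border colour).
# ASSUMPTION: WIDGET_URL and the iframe markup are hypothetical; the real
# snippet comes from the GRAB THE WIDGET CODE window.
from urllib.parse import quote

WIDGET_URL = "http://www.netvibes.com/widgets/netex"  # hypothetical address

def embed_code(title, height=300, width=250, border_colour="#cccccc"):
    """Return an HTML snippet to paste into any page that allows JavaScript/iframes."""
    return (
        f'<iframe src="{WIDGET_URL}?title={quote(title)}" '
        f'height="{height}" width="{width}" '
        f'style="border: 1px solid {border_colour};"></iframe>'
    )

print(embed_code("NetEx blog", height=400, width=300))
```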

The applications are limitless!! Is the widget revolution coming?

Thanks Netvibes!!




          Have you seen the Netvibes sidebar option for your friends' activities?   
UPDATE: I have changed the capture to show that I can follow Netvibes activities while viewing another page... eh eh :) It would be interesting to have an option to show or hide the comments...


When you are in the new Activities dashboard (http://www.netvibes.com/public),
select the Friends tab, then in the menu at the bottom of the page click on Sidebar.

This will install a bookmarklet, a special bookmark that, when you select it in your browser's bookmarks, opens a sidebar on any page you are viewing, showing your friends' latest activities.

It is handy if you are waiting for important news and you have good friends :)

There are four tabs in the sidebar: Friends, Me, Buzz, and Everyone, so you can use it even if you do not yet have many friends on Netvibes.

I do not know what the refresh rate of the sidebar is...

It works fine with Firefox 3, but I have not tested it with IE...

I like it! Thanks, NTVB team!

You can also follow your activities and your friends' activities with RSS or Atom feeds, or even a special widget, just by clicking the icons in the same bottom-page menu.
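
Those feeds also mean you can follow activities entirely outside the browser. Here is a rough sketch, assuming the `feedparser` library; the feed address is a hypothetical placeholder, so copy the real link from the RSS/ATOM icon on your own page.

```python
# Minimal sketch: read a Netvibes activities feed outside the browser.
# ASSUMPTION: FEED_URL is hypothetical -- copy the real address from the
# RSS/ATOM icon in the bottom-page menu of your Activities page.
import feedparser  # pip install feedparser

FEED_URL = "http://www.netvibes.com/activities.rss"  # hypothetical placeholder

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Each entry is one activity (a star, a share, a status update, ...).
    print(entry.get("published", "?"), "-", entry.get("title", "(no title)"))
```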
          With the new Netvibes Buzz you will see not just how many people shared the news, but also who. I like it!   
The new Netvibes Buzz dashboard (http://www.netvibes.com/buzz) has a nice new feature.

When you click on the publishing date of any buzz, you are shown all the users who starred that activity, along with their comments: not only your friends, but all Netvibes users.

If a buzz was started by you, like the one in the capture, you can see who else shared it and perhaps add some previously unknown users as new friends.

That is the start of a true Netvibes community.

I like that feature.

If you click on a user's name, you will see all of their activity.
          (USA-WA-Redmond) Software Engineering Manager   
Microsoft Edge is our new flagship web browser, designed for Windows 10 devices and the modern web. For 500 million Windows 10 users, the browser is integral to their lives at work, school, and home. People spend over 50% of their time in Windows in a browser, and Edge is the most used first-party app on Windows 10. Today Edge has over 300 million monthly active users (MAU) and growing. Microsoft Edge is designed to be fast, reliable, battery-efficient, and secure, with differentiated experiences in common web activities like reading, research, commerce, notetaking, and multi-tasking across multiple sites. Edge is at the center of the Windows experience and increasingly a core driver of Windows business results via Bing.

Our team is an integrated studio of PMs, Designers, Developers, and Data Engineers. We have a deep love for the web and the connectedness it enables among people around the globe. We are a high-function, high-ambition team that is interconnected with all of the major groups across the company. We believe strongly that only by building a great team and culture will we be able to deliver differentiated experiences that customers value highly enough to choose Microsoft Edge.

We are looking for an exceptional Engineering Manager to lead one of our engineering teams. This role is highly complex and highly challenging as we evolve the culture of the company and the organization to be customer-obsessed and data-driven in everything we do. In conjunction with the rest of the leadership team, the engineering manager is responsible for clearly understanding market dynamics, partnering closely with teams across the company for shared success, setting an overall strategy for how we evolve our engineering approach, leading the engineering team through the project, and, most importantly, being a terrific manager and cultural leader within our team and across the company.

As manager of the organization, you will be responsible for:
• Leading the organization to be customer-obsessed and ready to meet growing demands
• Defining the strategy and vision for success of the evolving organization as well as the Microsoft Edge product
• Leading the team to embrace data-driven engineering and agile methodologies
• Partnering with engineering teams across Windows and Microsoft to define, measure, and monitor the key performance metrics that enable us to deliver the best-in-class web browser on Windows (including Xbox and HoloLens) as a Universal Windows Platform (UWP) app that users choose and love
• Continuously raising the quality of Microsoft Edge flights and releases by defining the flighting and release criteria
• Identifying the next set of innovations that will address user needs and drive consumer adoption
• Understanding and driving adoption of Microsoft Edge in enterprise and education segments through targeted investments
• Gathering and analyzing usage data and business metrics to drive insights about our customers, drive prioritization and decision making, and drive product improvement
A successful candidate will have the following qualifications:
• BS or MS in Computer Science or an engineering field
• 10+ years of experience leading/managing people, with a proven track record of building strong teams
• 15+ years designing and developing software, with a proven track record as a technical leader on complex projects
• Passion for the web, Windows, and delivering excellent user experiences
• A curious, relentless, get-it-done, positive attitude
• Ability to motivate and set vision in the face of a high degree of ambiguity
• Exceptional communication and interpersonal skills with a customer focus
• Strong technical engineering, design, coding, testing, and/or debugging skills
• Ability to collaborate and drive quality across the team and disciplines (Design, PM, Dev, Marketing, etc.)
• A strategic mindset and a demonstrated ability to see and act within the bigger picture

Experience with any of the following is a major plus:
• Data Engineering; Data Science, Data Mining & Analytics
• Direct customer engagement
• Power BI and SQL, including database design and query optimization
• Web technologies, including HTML, CSS, and JavaScript
• Networking technologies, graphics, Windows programming, and OS fundamentals
• Working on v1 products or at a startup
• Demonstrated experience shipping products at Microsoft

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Development (engineering)
          (USA-WA-Redmond) Senior Program Manager   
Do you want to influence every Azure engineer and be part of the team building this planet's second largest public cloud? If this catches your eye, then we have a role just for you. In today's cloud-first marketplace, Microsoft has millions of customers across the world that leverage the Microsoft Cloud. To succeed, Microsoft's strategy for delivering great customer experiences requires smart engineers who hit the ground running and make near-immediate impact using their one-of-a-kind passion for technology. If you want to influence and educate thousands of engineers on Azure technology, we've got the opportunity of a lifetime for you!

The Azure team is looking for an Azure technology specialist with extensive training experience. This role requires mastery of building visually compelling presentations and delivering them to other engineers. The primary responsibility is educating and teaching Azure to Microsoft employees, which requires unique depth in Azure technology. The ideal candidate is a full-time technical trainer who will deliver top-notch new-hire training for engineers working on the Azure platform and 1st party services. This position is a rare and fantastic opportunity for a highly competent, well-versed, seasoned individual who has a broad and deep understanding of the Azure platform and 1st party service offerings and can build, articulate, and deliver a 5-day training course to onboard new Developers and Program Managers to our products and culture.

Our Azure Engineering Boot Camp training program consists of a one-day quarterly Business & Technical Overview, as well as a 5-day Sessions + Hands-On Labs training experience that walks new hires through learning Azure and using our tools/services to build Azure, or build on top of Azure. Our Hands-On Labs cover the Azure platform, tools, and services that are used on a daily basis to build, validate, geo-replicate, and release Azure and 1st party Azure services on a global scale.

Your top goal will be to accelerate onboarding and ramp time for new hires, to schedule, track, and facilitate classroom learning, and to ensure swift adoption of Azure, our tools/services, and releasing to the cloud. Success in this role means you have contributed to building and delivering a world-class learning and development program that provides new hires a fantastic, positive learning experience and enables them to get familiar with our platform and services early in their career at rapid speed.
Primary Responsibilities:
• Facilitate live classroom instruction to deliver session-based and hands-on training on topics such as:
  • Intro to the Cloud, App Services, and Service Fabric
  • PaaS, IaaS, and SaaS in Azure, and Azure security
  • Hybrid connectivity & networking
  • ARM and containers
  • Customer use cases & cloud design patterns
• Develop and incorporate new content on emerging technologies, tools, features, and services
• Analyze and incorporate attendee feedback to continually improve the learning program experience
• Iterate on key learnings; identify and quickly adapt to changing program needs
• Coordinate and data-mine new learning opportunities with SMEs within C&E and Microsoft
• Leverage local SMEs' tribal knowledge and spread/promote adoption of best practices
• Assess target audience, impact, and relevance, and measure the success of the program

Qualifications:
• 5+ years of project or program management
• Exceptional written and interpersonal skills
• Excellent cross-group collaboration
• A team player who contributes to and supports strategic decisions to grow the program
• Energetic, enthusiastic, supportive, motivating, and promotes a positive learning environment
• Communicates effectively and concisely with stakeholders and leaders about our program and objectives
• Understands and can articulate learning objectives, their relevant impact on our audience, and how our objectives align to our all-up business goals and Microsoft "growth mindset" vision
• Displays structured thinking, planning, and delivery of curriculum
• Able to execute by working through others, influence without authority, and deal with ambiguity

If you have a natural ability to excite others, can lead engineers to learn how to launch services to the cloud, are passionate about improving engineering productivity across Azure and C+E, and are eager to play with the latest and greatest industry standards and technologies, then we want you!

Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to askstaff@microsoft.com. Program management (engineering)
          Reasons to Oppose Common Core   

Updated Dec. 21, 2013

I put together another post with examples of Common Core curriculum:
http://coolstuff4catholics.blogspot.com/2013/12/why-common-core.html


Here are more links with reasons to oppose Common Core. Please also check my archives under Common Core.

This YouTube video gives a good overview of the topic.