Updating Your Website   
When updating your website you essentially have two options: hire a professional to update your static website, or do it yourself (or have a staff member do it) if you have a dynamic website. Hire a professional web designer: if you have a static website (i.e. one that is not driven by a database or a content management system), you will likely...
          DIAB 6.3.44.35   
SQL Server database monitoring, troubleshooting and diagnostic software.
          Database Tour 8.2.4.33   
Cross-database tool
          Image resized on RadGrid exporting   

I have a RadGrid with an export function that pulls data from a database, along with an additional header and some text containing the document conditions. These conditions include images; the problem is that the last image gets resized to fit the height of the PDF.

 

Protected Sub CotDetGrid_PdfExporting(sender As Object, e As GridPdfExportingArgs) Handles CotDetGrid.PdfExporting
    Try
        colwidth = Split(CamposDeseados, ",").Count
        Dim columnasMenos = 0
        If PoscProducto.Equals("Fila") Then
            If CamposDeseados.Contains("Producto") And CamposDeseados.Contains("Observación") Then
                colwidth -= 2
                columnasMenos = 2
            ElseIf CamposDeseados.Contains("Producto") Or CamposDeseados.Contains("Observación") Then
                colwidth -= 1
                columnasMenos = 1
            End If
        Else
            If CamposDeseados.Contains("Observación") Then
                colwidth -= 1
                columnasMenos = 1
            End If
        End If
 
        colwidth = CType((555 / colwidth), Integer) ' calculate the column width based on the number of columns
 
        e.RawHTML = ""
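        ' Build the quote header: logo image, address/phone, elaboration/emission dates, client and contact details, and the quote message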
 
        e.RawHTML = "<div style='font-size:10pt;'>" & _
                        "<table width='555px' style=' margin-left:0px; margin-right:0px;' >" & _
                           "<colgroup><col/><col/></colgroup>" & _
                            "<tbody>" & _
                              "<tr>" & _
                                 "<td align='center' colspan='2'> " + Imagen + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td align='center' colspan='2'> " + DirTel + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td align='center' colspan='2'>   </td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td>   </td>" & _
                                 "<td align='right'>" + FechaElaboracion + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td>   </td>" & _
                                 "<td align='right'>" + FechaEmision + "</td>" & _
                              "</tr>" & _
                              "<tr><td> </td><td> </td></tr>" & _
                              "<tr>" & _
                                 "<td align='left'>" + Cliente + "</td>" & _
                                 "<td align='right'>" + Moneda + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td align='left'>" + Saludo + ContactoNombre + "</td>" & _
                                 "<td align='right'>" + NumPartidas + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td align='left'>" + ContactoPuesto + "</td>" & _
                                 "<td align='right'>" + fFolio + "</td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td align='left'>" + ContactoTelefono + "</td>" & _
                                 "<td>   </td>" & _
                              "</tr>" & _
                              "<tr>" & _
                                 "<td colspan='2'>   </td>" & _
                              "</tr>" & _
                              "<tr style='text-align: justify; text-justify: inter-word;'>" & _
                                 "<td colspan='2'> " + hiddenMensajeGrid.Value + " </td>" & _
                              "</tr>" & _
                           "</tbody>" & _
                        "</table>"
 
        Dim columnasPdf As Integer = CamposDeseados.Split(",").Length - columnasMenos
        Dim col = ""
 
        For i = 1 To columnasPdf Step 1
            col += "<col />"
        Next
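        ' Build the detail table: a dynamic <colgroup> plus a header row containing only the requested columns (CamposDeseados)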
 
        e.RawHTML = Convert.ToString(e.RawHTML) & _
                    "<table width='555px' style='font-size:8pt; margin-left:0px; margin-right:0px;white-space: nowrap;'>" & _
                       "<colgroup>" & _
                        col & _
                       "</colgroup>" & _
                       "<tbody>" & _
                          "<tr style='text-align:center; background-color:#A0A0A0; color:#FFFFFF;  font-weight: bold;'>"
        If CamposDeseados.Contains("Partida") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Partida</td>"
        End If
        If CamposDeseados.Contains("Imagen") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Imagen</td>"
        End If
        If CamposDeseados.Contains("Código") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Código</td>"
        End If
        If CamposDeseados.Contains("Producto") And PoscProducto.Equals("Columna") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Producto</td>"
        End If
        If CamposDeseados.Contains("Unidad de Medida") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Unidad de Medida</td>"
        End If
        If CamposDeseados.Contains("Precio Unitario") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Precio Unitario</td>"
        End If
        If CamposDeseados.Contains("Cantidad") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td>Cantidad</td>"
        End If
        If CamposDeseados.Contains("Importe") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Importe</td>"
        End If
        If CamposDeseados.Contains("Descuento %") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Descuento %</td>"
        End If
        If CamposDeseados.Contains("Impuesto %") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Impuesto %</td>"
        End If
        If CamposDeseados.Contains("Impuesto") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Impuesto</td>"
        End If
        If CamposDeseados.Contains("Total") Then
            e.RawHTML = Convert.ToString(e.RawHTML) & _
                            "<td style='text-align:right;'>Total</td>"
        End If
        e.RawHTML = Convert.ToString(e.RawHTML) & _
                      "</tr>" & _
                        body & _
                   "</tbody>" & _
                "</table>" & _
            "</div>"
 
        e.RawHTML = Convert.ToString(e.RawHTML) & "<br/><br/><br/>"
        e.RawHTML = Convert.ToString(e.RawHTML) & "<div style='font-size:10pt;'>"
        e.RawHTML = Convert.ToString(e.RawHTML) & "<table width='555px' style='margin-left:0px; margin-right:0px;' >"
        e.RawHTML = Convert.ToString(e.RawHTML) & "<colgroup><col /></colgroup>"
        e.RawHTML = Convert.ToString(e.RawHTML) & "<tbody>"
 
        If IncluirCondiciones = 1 Then
 
            CotizacionCondiciones = CotizacionCondiciones.Replace("<p", "<div style='font-size:10pt;'")
            CotizacionCondiciones = CotizacionCondiciones.Replace("</p>", "</div><br /><p></p>")
            CotizacionCondiciones = CotizacionCondiciones.Replace("span", "div style='font-size:10pt;'")
 
            e.RawHTML = Convert.ToString(e.RawHTML) & "<tr> <td style='line-height:1;'> <p></p>" + CotizacionCondiciones + "</td> </tr>"
 
        End If
        If IncluirFirma = 1 Then
            e.RawHTML = Convert.ToString(e.RawHTML) & "<tr><td> </td></tr><tr><td> </td></tr><tr><td> </td></tr><tr><td style='text-align:center'>___________________________</td></tr><tr><

€825/month - Renbaanlaan - Elsene
          Free Id Card activation   

Free Id Card activation on Ordine degli Architetti di Caltanissetta

Notice to members: Dear Colleague, we are sending you the link where you can request, free of charge, the new Id Card of the Ordine degli Architetti PPC di Caltanissetta. With the new Id Card you will be able to access new services, and your data will be updated automatically in our database. Request your ID CARD. Kind regards, The President, Arch. Paolo lo […]

The article Free Id Card activation appears to have been published first on Ordine degli Architetti di Caltanissetta.


          Office of 137 m² for rent    

€1,450/month - Erfprinslaan - Sint-Lambrechts-Woluwe
          Office of 260 m² for sale   

€416,000 to €425,000 - Wolvengracht - Brussel Vijfhoek
          Marketing / Customer Service - Smashing Cleaning Services - Dubai   
Relationship … document all actions and responses in the customer database. A problem solver with excellent public relations skills and an ongoing commitment to....
From Smashing Cleaning Services - Thu, 29 Jun 2017 09:10:47 GMT - View all Dubai jobs
          Office of 70 m² for rent    

€1,000/month - Oudergemselaan - Etterbeek
          Office of 70 m² for rent    

€750/month - Louizalaan - Brussel Louiza
          Office of 80 m² for rent    

€1,350/month - Louis Lepoutrelaan - Elsene
          Office of 50 to 150 m² for rent    

€1,000 to €1,800/month - Clémenceaulaan - Anderlecht
          Office of 50 to 193 m² for sale   

€116,500 to €449,000 - Elsense Steenweg - Elsene
          Office of 85 m² for rent    

€980/month - Waterloosesteenweg - Ukkel
          Office of 80 m² for rent    

€850/month - Winston Churchilllaan - Ukkel
          Office of 80 m² for sale   

€215,000 - Gulledelle - Sint-Lambrechts-Woluwe
          Everyone With A Name Match Will Be A Vote Fraudster   
Kobach knows this game with databases. You program them to make soft matches - similar names, party affiliation - and then accuse everyone who has a match of having voted twice. Of course this almost never ever ever ever happens and certainly doesn't happen enough to swing elections, but gotta lock up potential Dem voters any way you can.

A letter from Kris Kobach, the vice chairman of a White House commission looking into voter fraud and other irregularities, is drawing fire from some state election officials. The letter, sent Wednesday to all 50 states, requests that all publicly available voter roll data be sent to the White House by July 14, five days before the panel's first meeting.

Voter suppression has been his career for years. What a calling. He should be in jail.
          Client Service Representative   
CO-Woodland Park. Tunstall, a global leader in response center and remote patient services, seeks a full-time Client Services Representative. Job duties: answering incoming phone calls for client concerns and inquiries; basic troubleshooting for equipment; assisting with contacting clients that are out of testing compliance; data entry for new clients into databases; generating work orders for field installers for
          Alert Notice 585: Monitoring of Evryscope targets requested for follow-up   

June 30, 2017: Dr. Octavi Fors (University of North Carolina at Chapel Hill) and colleagues have requested AAVSO observers' assistance in following up on some targets from the Evryscope survey.

Dr. Fors writes: "The Evryscope team is running a survey of transiting planetesimals around white dwarfs (WDs). To date, WD1145+017, discovered by Kepler, is the only WD found to have disintegrating transiting Ceres-mass objects. The debris/planetesimals transits candidates found by Evryscope need to be confirmed with V or R Johnson filtered, higher SNR (~50) and faster (~25-60s) cadence photometry than Evryscope. We encourage AAVSO observers to follow up these debris/planetesimals transits candidates.

"As a by-product of this survey, Evryscope is also detecting WDs and hot sub dwarfs which show variability of all kinds which we encourage to follow up too."

The first set of targets in this Evryscope-AAVSO follow-up collaboration is given in the list below. The targets will become increasingly observable over the coming weeks and months. Please add them to your programs if possible as soon as they become observable from your location.

Target                   R.A.(2000)   Dec(2000)    Range         Observing Interval*  Type
EC 01541-1409            01 56 31.90  -13 54 26.4  12.29-12.33V  4-5 (u)              V361HYA (very rapidly pulsating hot sub-dwarf B star)
GD 1068                  02 00 13.26  -17 28 43.8  11.95-12.13V  5-6                  High proper motion star**; R (close binary with strong reflection)
HE 0218-4447             02 20 24.49  -44 33 28.5  12.89-?V      2 runs x 5           Variability type unknown
HE 0218-3437             02 20 59.77  -34 23 35.3  13.39-13.40V  2 runs x 5           ELL (rotating ellipsoidal close binary)
2MASS J05144393-0848064  05 14 43.93  -08 48 06.4  11.21-?V      4-5 (u)              Hot subdwarf
ASAS J102322-3737.0      10 23 21.89  -37 36 59.9  11.61-11.83V  4                    EA+R
JL 94                    22 01 52.22  -75 52 04.4  13.03-?V      2 runs x 6           Hot subdwarf
EC 23073-6905            23 10 35.60  -68 49 30.5  12.59-?V      4-5 (u)              Variability type unknown

*Number of hours each target should be observed in  a single night. (u) means debris-like targets which need uninterrupted photometry for 4-5 hrs.
**GD 1068 proper motion values (mas/yr) are RA -20.7, Dec -40.5

Dr. Fors adds that "Transits/dips due to debris-like/planetesimal eclipses around WDs have periods of 4-5hrs (at least in the case of WD1145+017). Thus, we would like to obtain uninterrupted (except for readout time) photometry runs of 4-5hrs imaging the same candidate. If target visibility prevents doing so, at least 2.5hr-runs would be valuable; 2 or 3 runs of 2.5 hrs each would be great."

Johnson V photometry is preferred; if V is not possible, R is okay. Remember that when you are submitting your data to the AAVSO, the letter 'R' stands for Cousins R; 'RJ' stands for Johnson R.

Cadences of 25-60 sec should allow resolution of a debris disk or period determination for the variables on the list. A SNR>50 or higher is requested for all targets.

Charts: Finder charts with comparison star sequences may be created using the AAVSO Variable Star Plotter (VSP). For GD 1068, a 'D' or 'E' scale chart will need to be created to see the comp stars. Also, observers are reminded that GD 1068 is a high proper motion star.

Submit observations: Please submit observations to the AAVSO International Database using the names given in the target list, making sure to include spaces between parts of a name as given.

This campaign is being monitored on the AAVSO Observing Campaigns webpage and is being followed on the Campaigns and Observation Reports Forum at https://www.aavso.org/evryscope-campaign-2017.

This AAVSO Alert Notice was compiled by Elizabeth O. Waagen.

----------------------------------
SUBMIT OBSERVATIONS TO THE AAVSO

Information on submitting observations to the AAVSO may be found at:
http://www.aavso.org/webobs

ALERT NOTICE ARCHIVE AND SUBSCRIPTION INFORMATION

An Alert Notice archive is available at the following URL:
http://www.aavso.org/alert-notice-archive

Subscribing and Unsubscribing may be done at the following URL:
http://www.aavso.org/observation-notification#alertnotices

-------------------------------------------------

Please support the AAVSO and its mission -- Join or donate today:
http://www.aavso.org/apps/donate/


          Putin extends embargo on products from West through 2018   
A decree signed by Putin and posted in the official government database states that the embargo will now stretch to December 31, 2018.
          Data Entry - Sandhills Publishing - Lincoln, NE   
Data Entry is responsible for entering information into our equipment databases. Data Entry is responsible for adhering to standards while reviewing this...
From Sandhills Publishing - Tue, 20 Jun 2017 16:17:20 GMT - View all Lincoln, NE jobs
          The How To Guide for Six Pack Abs   

First off I want to mention that, for most people, getting six pack abs is not an easy task. It requires serious dedication, but it is possible! If you were blessed with naturally low body fat and good muscle definition, enjoy it! Otherwise, below is a general 2-step guide that, if followed religiously for 3 months, will produce results.

Step 1: Nutrition

This is the single most important part of the puzzle, hands down. You can have the most impressive set of abs, but if they're covered with a layer of fat, you won't see them! Break up your day with 5 or 6 mini-meals because this jump-starts your metabolism. And stop eating the food that is preventing results: white bread, loads of pasta, soda, candy, fast food, hydrogenated oils, sugars and high-fructose corn syrup.

Instead, replace them with foods that will help you reach your goal: oatmeal, olive oil, whole grain breads, fruits, vegetables, nuts, peanut butter, chicken, fish, protein and water. Be realistic- you'll slip here and there, but make a conscious effort to radically improve your eating habits because getting a six pack will be impossible if you don't.

Step 2: Exercise

You need to concern yourself with 3 different exercises: cardio, weightlifting and ab exercises. And aim to work out no less than 4 times a week.

The cardio you do can be anything: walking, running, biking, swimming....whichever cardio you don't mind doing so that you'll stick with it. Aim for 30-45 minutes, a minimum of 2 times a week.

Weightlifting is important because 3 pounds of added muscle burns as many calories as a 1 mile jog...and this is while you're just sitting around! Aim for 30-45 minutes, a minimum of 2 times a week. If you're confused as to what exercises to do for each body part, check out the following website. It features professional bodybuilders, but the information is great and can be used by anyone.

http://www.bodybuilding.com/fun/exercises.htm.

The last exercise you need to incorporate into your workout is ab exercises. Aim to work your abs a minimum of 3 times a week. There are a ton of different ab exercises you can do so try to find 3 or so that you enjoy doing so you can mix it up. A good database of different ab exercises is:

http://www.bodybuilding.com/fun/exername.php?MainMuscle=Abdominals

Well, there you have it. Follow the above for 3 months religiously, and while results will vary from person to person, you will experience improvement. It will take serious dedication on your part, but imagine the feeling you'll get when you look in the mirror and like what you see.

By Ryan Cote



          GitHub flub spaffs 8Tracks database, 18 million accounts leaked   

Passwords were salted, so there's some comfort

A staffer of social music streaming site 8Tracks is having a really bad day: a bit of GitHub user carelessness has leaked 18 million accounts.…


          Introducing Azure DB for PostgreSQL | Azure Friday   

Saloni Sonpal joins Scott Hanselman to discuss the newest offering in the Azure Database family – Azure Database for PostgreSQL, a managed database service that lets you develop and deploy apps with a Postgres database in minutes and scale on the fly.

For more information, see Azure Database for PostgreSQL.
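As a rough sketch of how an application might connect to such a managed PostgreSQL instance over plain JDBC (the server name, database and credentials below are hypothetical placeholders, and the PostgreSQL JDBC driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgConnectDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; SSL is typically enforced for Azure Postgres servers.
        String url = "jdbc:postgresql://myserver.postgres.database.azure.com:5432/postgres?sslmode=require";
        try (Connection conn = DriverManager.getConnection(url, "myadmin@myserver", "myPassword");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print the server version as a simple connectivity check
            }
        }
    }
}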


          Kinetica: a GPU-driven in-memory database for analyzing and visualizing large, high-speed data streams in real time   
 … Read More
          TS/SCI CLEARED DATABASE OPERATIONS COORDINATOR   
MD-Rockville, ManpowerGroup Public Sector, an Experis affiliate, provides a full range of translation, transcription and interpreting services in 150 languages. We are located in Falls Church, VA, and serve both government and commercial clients. We are seeking cleared personnel to support an upcoming federal contract in the DC area starting in late September. The position is full-time with benefits for a multi
          Receptionist/Administrative Assistant - Hiring and Empowering - Olean, NY   
Operate & own the firm CRM database with precision and excellence. All facets of position involve helping a growing Estate Planning Law firm design a path to...
From Indeed - Tue, 27 Jun 2017 20:26:19 GMT - View all Olean, NY jobs
          Reply To: Migrating a database from one WordPress site to another   

If you are moving from a non-Participants Database database to a WordPress site with Participants Database, I suggest you read this article:

Importing an Existing Database into Participants Database

If you are moving an existing Participants Database to Participants Database on another site, read this:

How to Copy Participants Database to Another Site


          How a Bitcoin Whitehat Hacker Helped the FBI Catch a Murderer   
whitehat (1).jpg

An ethical hacker breached the database of a phony darknet website offering hitman services and leaked the data. The information from the data dump helped the FBI in their investigation of a man who murdered his wife.

In November 2016, Stephen Carl Allwine, 47, of Cottage Grove, Minnesota, killed his wife in “one of the most bizarre cases ever seen,” police officers reported. The husband tried to mask the murder as a suicide, including putting a 9 mm pistol next to Amy Allwine’s elbow. However, detectives arriving on the scene identified the case as murder and collected evidence — mostly electronic devices, such as computers — belonging to Mr. Allwine. Later on, in January, investigators arrested and charged Mr. Allwine with second-degree murder based on the forensic evaluation of the confiscated electronic equipment.

In May 2016, a hacker called “bRpsd” breached the database of a controversial hitman service offered on a darknet website. The service, “Besa Mafia,” offered a link between customers and hitmen, who could register on the site anonymously. The price for a murder ranged between $5,000 and $200,000, but clients seeking to avoid fatalities could also hire a contractor to beat up a victim for $500 or set somebody’s car on fire for $1,000.

The hacker uploaded the data dump to a public internet website. The leaked files contained user accounts, email addresses, personal messages between the Besa Mafia admin and its customers, “hit” orders and a folder named “victims,” providing additional information on the targets.

The breach highlighted the fake nature of the website, which operated only to collect money from the customers. Chris Monteiro, an independent researcher who also hacked into the site, stated the owner or owners of Besa Mafia had made at least 50 bitcoins ($127,500 based on the current value of the cryptocurrency) from the scam operation.

According to a message posted by a Besa Mafia administrator and uncovered in the dump, “[T]his website is to scam criminals of their money. We report them for 2 reasons: to stop murder, this is moral and right; to avoid being charged with conspiracy to murder or association to murder, if we get caught.”

The leak of the Besa Mafia database helped the police investigating the murder of Mrs. Allwine. As the officers analyzed her husband’s devices, they discovered the suspect had accessed the dark web as early as 2014. Furthermore, investigators identified the pseudonym Mr. Allwine used on the darknet, “dogdaygod,” which was also linked to his email, “dogdaygod@hmamail.com,” in some cases. Detectives found bitcoin addresses in the conversations between Besa Mafia and Mr. Allwine, which linked the husband directly to the “dogdaygod” pseudonym, providing authorities with necessary evidence for the case.

Eventually, law enforcement agents analyzed the data dump bRpsd leaked and discovered Mr. Allwine’s email in the list. In addition, investigators found messages between the suspect and the Besa Mafia admin. According to a criminal complaint, Mr. Allwine paid between $10,000 to $15,000 to the supposed hitman service to kill his wife. The complaint detailed how Mr. Allwine had decided to have the hitman shoot Mrs. Allwine at close range and burn down the house afterward.

However, once the funds were transferred, the Besa Mafia communicator told Mr. Allwine that “local police [have] stopped the hitman [from] driving a stolen vehicle and taken [him] to jail prior to the hit,” thus rendering him unable to complete his “service.” The complaint cited Sergeant McAlister who reported that during that time, “no one was apprehended in Minnesota and western Wisconsin in a stolen vehicle and possession of a gun.”

It is likely that the ethical hacker’s data breach had an impact on Mr. Allwine’s case; on March 24, 2017, the Washington County District Court charged him with first-degree murder. In addition, officers have gathered more evidence in the case — a drug called scopolamine was discovered at 45 times higher than the recommended level in Mrs. Allwine’s body. Investigators subsequently discovered that her husband had also ordered the substance on the dark web.

The post How a Bitcoin Whitehat Hacker Helped the FBI Catch a Murderer appeared first on Bitcoin Magazine.


          QA Tester - Noviur Technologies - Vaughan, ON   
Extracting data from the database to cross-reference against expected results within test scripts. Design, develop and maintain test plans and test cases utilizing...
From Noviur Technologies - Thu, 08 Jun 2017 04:21:22 GMT - View all Vaughan, ON jobs
          Senior Database Engineer - Barrick Gold Corporation - Cortez, NV   
Barrick Nevada is an integrated gold mining operation that combines the Cortez and Goldstrike properties in Nevada, employing a total of 3,000 employees and
From Barrick Gold Corporation - Mon, 26 Jun 2017 21:42:56 GMT - View all Cortez, NV jobs
          Business Intelligence Lead (Tableau) - Teradata - Manila   
Strong SQL and Database experience with a major RDBMS (for example Teradata, SQL Server, DB/2, Oracle). Business Intelligence Lead (Tableau)....
From Teradata - Thu, 22 Jun 2017 10:19:32 GMT - View all Manila jobs
          #341: TC39, ECMAScript, and the Future of JavaScript   
This week's JavaScript news. Read this e-mail on the Web
JavaScript Weekly
Issue 341 — June 30, 2017
A thorough explanation of how new features make it into JavaScript, before focusing on some practical examples including Array#includes, named captures, and lookbehind assertions in regexes.
Nicolás Bevacqua

Eric Bidelman runs through how to use Headless Chrome, using Karma as a runner and Mocha+Chai for authoring tests.
Google Developers

MONGODB
See the performance implications of using Lambda functions with a database-as-a-service like MongoDB Atlas.
MONGODB   Sponsor

The official spec for ES2017 (essentially the 8th edition of the JS spec) has been published in HTML and PDF if you’re lacking for bedtime reading.
ECMA

A full-stack app framework built on React and GraphQL. It’s an evolution of Telescope but is becoming less Meteor-dependent.
Sacha Greif

A well presented tutorial site complete with rich, live editable examples.
James K Nelson

Includes support for dynamic import() expressions, string enums, & improved checking.
Microsoft

React Status is our React focused weekly. This week it includes a React Native starter kit and an introduction to Redux-first routing.
React Status

Jobs Supported by Hired.com

Can't find the right job? Want companies to apply to you? Try Hired.com.

In Brief

Babylon, Babel's JS Parser, To Support TypeScript news
Not a lot to see yet, but .ts support has been baked in.
Babel

Microsoft's 'Sonar' Linting Tool Joins the JS Foundation news
Check out Sonar’s homepage to learn more.
Kris Borchers

Using Source Maps to Debug Errors tutorial
Let's talk JavaScript Source Maps. What are they? How to enable source mapping? Why aren't they working?
ROLLBAR  Sponsor

Getting Started with Webpack 3 tutorial
João Augusto

The 'Best' Frontend JavaScript Interview Questions tutorial
Opinions will vary but if you do well at these, you’re in a strong position.
Boris Cherny

Build A Realtime Chart with Vue.js and Pusher tutorial
Yomi Eluwande

Unleash The Power of Angular Reactive Forms video
Oriented around live coding a form from scratch.
Nir Kaufman

Use AngularJS to Build a Fast and Secure Chat App 
PubNub gets your data anywhere in less than 0.25 seconds. It’s so easy with PubNub’s AngularJS library.
PubNub  Sponsor

Choosing A Frontend Framework in 2017 opinion
This Dot Labs

Why I'm Switching from React to Cycle.js opinion
SitePoint

An Up to Date List of TC39 Proposals and their Status tools
Nicolás Bevacqua

Search and Install npm Modules Automatically from the Atom Editor tools
Algolia

Decaffeinate: Convert Your CoffeeScript to Modern JavaScript tools
A well established project that continues to get frequent updates.
Brian Donovan

Infinite Scroll v3: As Users Scroll, Automatically Load More tools
Note it’s both GPL3 and commercial.
Metafizzy

Study: A Progressive, Client/Server AB Testing Library code
Dollar Shave Club

echarts: Powerful Charting and Visualization in the Browser code
Lots of demos here.
Baidu

vanilla-tilt.js: A Dependency-Free, Smooth 3D Element Tilting Library code

RE:DOM: A Tiny (2KB) JS Library for Creating User Interfaces code
Juha Lindstedt

frontexpress: An Express.js-Style JavaScript Router for the Front-End code
Camel Aissani

ForwardJS Tickets on sale today 
ForwardJS  Sponsor

Curated by Peter Cooper and published by Cooperpress.

Like this? You may also enjoy: FrontEnd Focus : Node Weekly : React Status

Stop getting JavaScript Weekly : Change email address : Read this issue on the Web

© Cooperpress Ltd. Office 30, Lincoln Way, Louth, LN11 0LS, UK


          Sweden Wind Power Capacity, Generation, Levelized Cost of Energy - Industry Analysis, Trends, and Forecast Report 2030   
 

Market Research Hub




(EMAILWIRE.COM, June 30, 2017 ) Market Research Hub (MRH) has recently announced the addition of a fresh report, titled “Sweden Wind Power Market Outlook to 2030, Update 2017 - Capacity, Generation, Levelized Cost of Energy (LCOE), Investment Trends, Regulations and Company Profiles” to its report offerings. The report provides in depth analysis on global renewable power market and global wind power market with forecasts up to 2030.

Request Free Sample Report: http://www.marketresearchhub.com/enquiry.php?type=S&repid=699466

"Wind Power in Sweden, Market Outlook to 2030, Update 2016 - Capacity, Generation, Levelized Cost of Energy (LCOE), Investment Trends, Regulations and Company Profiles is the latest report from GlobalData, the industry analysis specialists that offer comprehensive information and understanding of the wind power market in Sweden.

The report provides in depth analysis on global renewable power market and global wind power market with forecasts up to 2030. The report analyzes the power market scenario in Sweden (includes conventional thermal, nuclear, large hydro and renewable energy sources) and provides future outlook with forecasts up to 2030. The research details renewable power market outlook in the country (includes wind, small hydro, biopower and solar PV) and provides forecasts up to 2030. The report highlights installed capacity and power generation trends from 2006 to 2030 in Sweden wind power market. A detailed coverage of renewable energy policy framework governing the market with specific policies pertaining to wind power is provided in the report. The research also provides company snapshots of some of the major market participants.

The report is built using data and information sourced from proprietary databases, secondary research and in-house analysis by GlobalData's team of industry experts.

Scope
The report analyses global renewable power market, global wind power (Onshore and Offshore) market, Sweden power market, Sweden renewable power market and Sweden wind power market. The scope of the research includes -
- A brief introduction on global carbon emissions and global primary energy consumption.
- An overview on global renewable power market, highlighting installed capacity trends, generation trends and installed capacity split by various renewable power sources. The information is covered for the historical period 2006-2015 (unless specified) and forecast period 2015-2030.
- Renewable power sources include wind (both onshore and offshore), solar photovoltaic (PV), concentrated solar power (CSP), small hydropower (SHP), biomass, biogas and geothermal.
- Detailed overview of the global wind power market with installed capacity and generation trends, installed capacity split by major hydropower countries in 2015 and key owners information of various regions.
- Power market scenario in Sweden and provides detailed market overview, installed capacity and power generation trends by various fuel types (includes thermal conventional, nuclear, large hydro and renewable energy sources) with forecasts up to 2030.
- An overview on Sweden renewable power market, highlighting installed capacity trends (2006-2030), generation trends(2006-2030) and installed capacity split by various renewable power sources in 2015.
- Detailed overview of Sweden wind power market with installed capacity and generation trends and major active and upcoming wind projects.
- Deal analysis of Sweden wind power market. Deals are analyzed on the basis of mergers, acquisitions, partnership, asset finance, debt offering, equity offering, private equity (PE) and venture capitalists (VC).
- Key policies and regulatory framework supporting the development of renewable power sources in general and wind power in particular.
- Company snapshots of some of the major market participants in the country.

Reasons to buy
- The report will enhance your decision making capability in a more rapid and time sensitive manner.
- Identify key growth and investment opportunities in Sweden wind power market.
- Facilitate decision-making based on strong historic and forecast data for wind power market.
- Position yourself to gain the maximum advantage of the industry's growth potential.
- Develop strategies based on the latest regulatory events.
- Identify key partners and business development avenues.
- Understand and respond to your competitors' business structure, strategy and prospects.

Read Full Report with TOC: http://www.marketresearchhub.com/report/wind-power-in-sweden-market-outlook-to-2030-update-2016-capacity-generation-levelized-cost-of-energy-lcoe-investment-trends-regulations-and-company-profiles-report.html

Table of Contents:

1 Table of Contents 2
1.1 List of Tables 6
1.2 List of Figures 7
2 Executive Summary 8
2.1 Government Support in Conjunction with Technology Development Driving Global Renewable Power Installations 8
2.2 Top 10 Countries Account for Over 84% of Wind Power Capacity 8
2.3 Renewable to Account for a Maximum Share of Installed Capacity by 2030 9
2.4 Wind Power to become One of the Primary Sources of Electricity in the Future 10
3 Introduction 11
3.1 Carbon Emissions, Global, 2001-2015 11
3.2 Primary Energy Consumption, Global, 2001-2025 13
3.3 Wind Power, Global, Technology Definition and Classification 15
3.4 Wind Power Market, Technology Overview 15
3.5 Wind Power Market, Turbine Components 16
3.6 Report Guidance 18
4 Renewable Power Market, Global, 2006 - 2030 19
4.1 Renewable Power Market, Global, Overview 19
4.2 Renewable Power Market, Global, Installed Capacity, 2006-2030 21
4.2.1 Renewable Power Market, Global, Cumulative Installed Capacity by Source Type, 2006-2030 21
4.2.2 Renewable Power Market, Global, Cumulative Installed Capacity Split by Source Type, 2015 and. 2030 23

Make an Enquiry: http://www.marketresearchhub.com/enquiry.php?type=enquiry&repid=699466

About Market Research Hub:

Market Research Hub (MRH) is a next-generation reseller of research reports and analysis. MRH’s expansive collection of Wind Power Market Research Reports has been carefully curated to help key personnel and decision makers across industry verticals to clearly visualize their operating environment and take strategic steps.

MRH functions as an integrated platform for the following products and services: Objective and sound market forecasts, qualitative and quantitative analysis, incisive insight into defining industry trends, and market share estimates. Our reputation lies in delivering value and world-class capabilities to our clients.

Contact Details:

90 State Street,

Albany, NY 12207,

United States

Toll Free: 866-997-4948 (US-Canada)

Tel: +1-518-621-2074

Email: press@marketresearchhub.com

Website: http://www.marketresearchhub.com/

Read Industry News @ https://www.industrynewsanalysis.com/


Sudip S
+1-518-621-2074
sales@marketresearchhub.com

Source: EmailWire.Com
          Sweden Hydropower Capacity, Generation, Regulations - Industry Analysis, Trends, and Forecast Report 2030   
 

Market Research Hub




(EMAILWIRE.COM, June 30, 2017 ) Market Research Hub (MRH) has recently announced the addition of a fresh report, titled “Sweden Hydropower Market Outlook to 2030, Update 2017 - Capacity, Generation, Regulations and Company Profiles” to its report offerings. The report provides in depth analysis on global renewable power market and global hydropower market with forecasts up to 2030.

Request Free Sample Report: http://www.marketresearchhub.com/enquiry.php?type=S&repid=1196694

"Hydropower (Large, Small and Pumped Storage) in Sweden, Market Outlook to 2030, Update 2017 - Capacity, Generation, Regulations and Company Profiles" is the latest report from GlobalData, the industry analysis specialists that offer comprehensive information and understanding of the hydropower market in Sweden.

The report provides in depth analysis on global renewable power market and global hydropower market with forecasts up to 2030. The report analyzes the power market scenario in Sweden (includes conventional thermal, nuclear, large hydro and renewable energy sources) and provides future outlook with forecasts up to 2030. The research details renewable power market outlook in the country (includes hydro, small hydro, biopower and solar PV) and provides forecasts up to 2030. The report highlights installed capacity and power generation trends from 2006 to 2030 in Sweden hydropower market. A detailed coverage of renewable energy policy framework governing the market with specific policies pertaining to hydropower is provided in the report. The research also provides company snapshots of some of the major market participants.

The report is built using data and information sourced from proprietary databases, secondary research and in-house analysis by GlobalData's team of industry experts.
Scope
The report analyses global renewable power market, global hydropower market, Sweden power market, Sweden renewable power market and Sweden hydropower market. The scope of the research includes -
- A brief introduction on global carbon emissions and global primary energy consumption.
- An overview on global renewable power market, highlighting installed capacity trends, generation trends and installed capacity split by various renewable power sources. The information is covered for the historical period 2006-2016 (unless specified) and forecast period 2017-2030.
- Renewable power sources include wind (both onshore and offshore), solar photovoltaic (PV), concentrated solar power (CSP), small hydropower (SHP), biomass, biogas and geothermal.
- Detailed overview of the global hydropower market with installed capacity and generation trends, installed capacity split by major hydropower countries in 2016 and key owners information of various regions.
- Power market scenario in Sweden and provides detailed market overview, installed capacity and power generation trends by various fuel types (includes thermal conventional, nuclear, large hydro and renewable energy sources) with forecasts up to 2030.
- An overview on Sweden renewable power market, highlighting installed capacity trends (2006-2030), generation trends(2006-2030) and installed capacity split by various renewable power sources in 2016.
- Detailed overview of Sweden hydropower market with installed capacity and generation trends and major active and upcoming hydro projects.
- Deal analysis of Sweden hydropower market. Deals are analyzed on the basis of mergers, acquisitions, partnership, asset finance, debt offering, equity offering, private equity (PE) and venture capitalists (VC).
- Key policies and regulatory framework supporting the development of renewable power sources in general and hydropower in particular.
- Company snapshots of some of the major market participants in the country.
Reasons to buy
- The report will enhance your decision making capability in a more rapid and time sensitive manner.
- Identify key growth and investment opportunities in Sweden hydropower market.
- Facilitate decision-making based on strong historic and forecast data for hydropower market.
- Position yourself to gain the maximum advantage of the industry's growth potential.
- Develop strategies based on the latest regulatory events.
- Identify key partners and business development avenues.
- Understand and respond to your competitors' business structure, strategy and prospects.

Read Full Report with TOC: http://www.marketresearchhub.com/report/hydropower-large-small-and-pumped-storage-in-sweden-market-outlook-to-2030-update-2017-capacity-generation-regulations-and-company-profiles-report.html

Table of Contents:

1 Table of Contents 2
1.1 List of Tables 5
1.2 List of Figures 6
2 Executive Summary 7
2.1 Fall in OECD Countries Carbon Emission despite a Global Rise during 2010-2015 7
2.2 Technological Advancements and Government Support Driving Global Renewable Power Installations 7
2.3 Top 10 Countries Account for Over 70% of Hydropower Capacity 7
2.4 Renewable to Stock Up Maximum Installed Capacity by 2030 8
2.5 Hydropower Capacity Dominates Electricity Generation in Sweden 9
3 Introduction 10
3.1 Carbon Emissions, Global, 2001-2016 10
3.2 Primary Energy Consumption, Global, 2001-2025 12
3.3 Hydropower, Global, Technology Definition and Classification 14
3.4 Report Guidance 16
4 Renewable Power Market, Global, 2006-2030 17
4.1 Renewable Power Market, Global, Overview 17
4.2 Renewable Power Market, Global, Installed Capacity, 2006-2030 18
4.2.1 Renewable Power Market, Global, Cumulative Installed Capacity by Source Type, 2006-2030 18
4.2.2 Renewable Power Market, Global, Cumulative Installed Capacity Split by Source Type, 2016 and 2030 20
4.2.3 Renewable Power Market, Global, Net Capacity Additions by Source Type, 2016-2030 22

Make an Enquiry: http://www.marketresearchhub.com/enquiry.php?type=enquiry&repid=1196694

About Market Research Hub:

Market Research Hub (MRH) is a next-generation reseller of research reports and analysis. MRH’s expansive collection of market research reports has been carefully curated to help key personnel and decision makers across industry verticals to clearly visualize their operating environment and take strategic steps.

MRH functions as an integrated platform for the following products and services: Objective and sound market forecasts, qualitative and quantitative analysis, incisive insight into defining industry trends, and market share estimates. Our reputation lies in delivering value and world-class capabilities to our clients.

Contact Details:

90 State Street,

Albany, NY 12207,

United States

Toll Free: 866-997-4948 (US-Canada)

Tel: +1-518-621-2074

Email: press@marketresearchhub.com

Website: http://www.marketresearchhub.com/

Read Industry News @ https://www.industrynewsanalysis.com/


Sudip S
+1-518-621-2074
sales@marketresearchhub.com

Source: EmailWire.Com
          Re: Problem with triggers in database   

by Darko Miletić.  

Does the database user that Moodle uses have the TRIGGER privilege enabled?

https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_trigger

You did not provide any specific error and that is required if you want more help.


          Problem with triggers in database   

by Vladimir Miranovic.  

Hello,

I have a problem in my organization. We have Moodle (2.9 and 3.2) with online courses, and we need to remove completed or dropped enrollments from our Moodle site/courses, but we also need to keep records of all enrollments, completions and group memberships for reporting (quarterly, yearly or ad-hoc).

For that reason I got the idea to create copies of the original Moodle tables in the same database (the live Moodle). For example, for enrol (an original table) I made xenrol (a copy of the original table, with fewer fields), and for course (an original table) I made xcourse (a copy of the original table, with fewer fields). I also made copies of user_enrolments, groups, groups_members and course_completions; for user I don't need a copy, nor for grades (grades_grades_history) or certificates (there is a plugin for keeping certificates).

After that I created a trigger on the AFTER INSERT event of the original course table (and similar triggers on the other tables).

DELIMITER $$

CREATE TRIGGER writetoxcourse
AFTER INSERT ON m320_course
FOR EACH ROW
BEGIN
    INSERT INTO m320_xcourse (xid, xcategory, xfullname, xshortname, xidnumber, xstartdate, xenddate, xgroupmode, xgroupmodeforce, xtimecreated, xtimemodified)
    VALUES (id, category, fullname, shortname, idnumber, startdate, enddate, groupmode, groupmodeforce, timecreated, timemodified);
END$$

DELIMITER ;

 

However, when I try to create a new course, I get an error from Moodle (the same happens with the other tables), and when I remove the trigger from the course table everything works normally.

The idea of making copies of the tables in the same database comes from the simplicity of accessing them and building reports with the standard "Configurable Reports" or ad-hoc queries.

Maybe someone will write a local plugin with this "Mini SYS in Moodle" in the future, but we need a solution now (our funding depends on reporting), so please help.

 



          Online Code Sessions on JAX-RS, JSON API, and Java EE   

With the Java EE 8 release coming up this summer, now is the best time to get up to speed with important APIs. You are invited to a free Code Online webinar to learn about key Java EE 8 APIs and how to connect your mobile applications to a Java EE backend. Join us for three hours of server-side development. For your convenience, the webinars are available in three time zones. 

June 20 at 9:30 AM PDT -   Oracle Code Online - Americas   
June 21 at 9:30 AM IST -   Oracle Code Online - APAC   
June 22 at 9:30 AM CEST -  Oracle Code Online - EMEA    

Here are the sessions! 

Reactive REST Clients in a Microservices Landscape with David Delabassee. When designing microservices exchanges, REST is clearly the most popular approach, i.e. the de facto standard. The JAX-RS API hides all the low-level details behind RESTful calls. Complexity really starts to arise when multiple remote services need to be consumed in a highly efficient manner. During this technical session, we will cover in detail the different solutions and best practices for efficiently consuming REST services. This includes: 

- Synchronous Vs. Asynchronous
- Jersey Reactive Client API
- Popular Reactive libraries (e.g. RxJava)
- JAX-RS 2.1 Client API
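
As a rough sketch of the JAX-RS 2.1 reactive client style listed above (the endpoint URL is a hypothetical placeholder, and a JAX-RS 2.1 implementation such as Jersey is assumed to be on the classpath):

import java.util.concurrent.CompletionStage;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class ReactiveClientDemo {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // rx() returns a CompletionStage-based invoker (new in JAX-RS 2.1),
        // so the call is non-blocking and composes with other async stages.
        CompletionStage<String> stage = client
                .target("http://localhost:8080/orders/42") // hypothetical endpoint
                .request()
                .rx()
                .get(String.class);

        stage.thenAccept(body -> System.out.println("Received: " + body))
             .exceptionally(t -> { t.printStackTrace(); return null; })
             .toCompletableFuture()
             .join(); // block here only so the demo JVM does not exit early

        client.close();
    }
}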

How to use the new JSON Binding API with Dmitry Kornilov. JSON support is an important part of Java EE. This session provides a deep dive into the JSON-P and JSON-B APIs, explains how they are connected, and shows how they can be used together. We will introduce and demonstrate new JSON-P features such as JSON Patch, JSON Pointer, and JSON Merge Patch, as well as JSON-B features such as default and customized mapping, adapters, and serializers.
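
For orientation, here is a minimal JSON-B (javax.json.bind) sketch of the default mapping mentioned above; the Person class is invented for the example, and a JSON-B implementation such as Yasson is assumed to be on the classpath:

import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;

public class JsonBindingDemo {

    // Simple POJO used for default mapping; public fields and a no-arg constructor keep it short.
    public static class Person {
        public String name;
        public int age;
        public Person() { }
        public Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) throws Exception {
        try (Jsonb jsonb = JsonbBuilder.create()) {
            // Serialize an object to JSON using the default mapping rules.
            String json = jsonb.toJson(new Person("Ada", 36));
            System.out.println(json); // e.g. {"age":36,"name":"Ada"}

            // Deserialize it back into a Person instance.
            Person p = jsonb.fromJson(json, Person.class);
            System.out.println(p.name + " is " + p.age);
        }
    }
}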

Enterprise Functionality for Mobile Apps with Johan Vos

Today an increasing number of companies and organizations are facing demands from their users (customers, partners, employees, and end-users) to make their enterprise functionality available via mobile apps. While many concepts that are used on the web also apply to mobile apps, the users of those apps typically expect more than just a website. In this session, we will demonstrate how you can reuse your existing investments in enterprise code and infrastructure, and easily add a mobile channel. We will demonstrate how the Oracle Cloud provides a great platform for bridging the gap between your enterprise code and the mobile apps your users are asking for.

Register now for the event in your timezone. These sessions are part of the server-side track. You will also have access to four more tracks including database, mobile, DevOps and full stack tracks.  


          Java Magazine Edition about Libraries    

By Guest Blogger Andrew Binstock 

In an age of frameworks, there still remains a supreme need for libraries, those useful collections of classes and methods that save us a huge amount of work. For all the words spilled on the reusability of object orientation (OO), it’s clear that code reuse has been consistently successful only at the library level. It’s hard to say whether that’s a failure of the promises of OO or whether those promises were unlikely to ever deliver the hoped-for reusability. 

In Stephen Colebourne’s article (page 28), he gives best practices for writing libraries of your own. Colebourne is the author of the celebrated Joda-Time library, which was the standard non-JDK time and date library prior to Java SE 8. In the article, he gives best practices for architecting the library and shares guidelines he has learned along the way that sometimes fly in the face of generally accepted programming precepts. Writing your own library? Then start here.

We also examine three well-designed libraries that provide useful functionality but might not be widely known. The first of these is Project Lombok (page 10), which uses annotations to greatly reduce the writing of boilerplate code—leading to fewer keystrokes and much more readable code. Andrés Almiray’s article on the JDeferred library (page 16) is a deep dive into the concepts of futures and promises, which are techniques for defining, invoking, and getting results from asynchronous operations. The built-in Java classes for futures and promises work well but can be difficult to program. JDeferred removes the difficulty and, like Lombok, leads to considerably cleaner code. 
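
As a tiny, hedged illustration of the kind of boilerplate reduction Lombok's annotations provide (it assumes the Lombok dependency and annotation processing are configured in the build; the Book class is invented):

import lombok.Data;

// @Data generates the getters, setters, equals(), hashCode() and toString()
// at compile time, plus a constructor for the final field, so none of that
// boilerplate has to appear in the source.
@Data
public class Book {
    private final long id;
    private String title;
    private String author;
}

A Book created with new Book(42L) then gets getTitle/setTitle, getAuthor, a value-based equals and a readable toString without a single hand-written accessor.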

Finally, we revisit an article we ran a year ago on jsoup (page 22), which is one of the finest ways of handling HTML: parsing, scraping, manipulating, and even generating it. 
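
To give a flavour of the jsoup usage the article covers (a minimal sketch; the HTML string is invented for illustration):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupDemo {
    public static void main(String[] args) {
        String html = "<html><head><title>Demo</title></head>"
                    + "<body><a href='https://example.org'>Example</a></body></html>";

        // Parse the markup into a Document and query it with CSS-style selectors.
        Document doc = Jsoup.parse(html);
        System.out.println("Title: " + doc.title());
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.text() + " -> " + link.attr("href"));
        }
    }
}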

If libraries are not your favorite topic, we have you covered with a detailed discussion (page 34) of how to use streaming syntax rather than SQL when accessing databases. In addition, we offer our usual quiz (this time with the inclusion of questions from the entry-level exam), our calendar of events, and other goodness. (Note that our next issue will be a jumbo special issue on Java 9.) 


          Congratulations New Java Champion Oliver Gierke   

Welcome New Java Champion Oliver Gierke

Oliver Gierke is leading the Spring Data project at Pivotal. He is an active member of the JCP expert group on JPA 2.1 and one of the main organizers of the JUG Saxony Day, OOP, JAX and WJAX conferences.

Oliver coined the Spring Data repository programming model, which is a widely used Java abstraction for developing data access layers for relational and non-relational databases. This simplifies the way Java developers interact with persistence technologies, as Spring Data provides an abstraction over APIs such as JPA. He is one of the leading experts on JPA and other persistence technologies. With Spring Data REST, he helped Java developers implement REST APIs. He also coined the Spring HATEOAS module and helped Java developers use hypermedia elements in REST APIs when using Spring MVC or JAX-RS.
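
As a brief, hedged sketch of that repository programming model (the Customer entity and the findByLastName finder are invented for illustration; Spring Data JPA derives the query from the method name):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;

@Entity
class Customer {
    @Id @GeneratedValue
    Long id;
    String firstName;
    String lastName;
}

// Spring Data generates the implementation at runtime: the basic CRUD operations
// come from CrudRepository, and the finder below is derived from its method name
// (roughly "select c from Customer c where c.lastName = ?1").
interface CustomerRepository extends CrudRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
}

Calling customerRepository.findByLastName("Gierke") then executes the derived query without any hand-written JPQL or SQL.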

Oliver is a consulting lecturer at the Chair of Software Engineering at TU Dresden helping students to get started with application development in Java. All of his material is available online: http://static.olivergierke.de/lectures/ This makes it easy for student developers to experiment with Java and receive a professional introduction to the language and Java development practices. 

Oliver contributes almost daily to diverse Open Source frameworks on Github, see https://github.com/olivergierke.  He is a frequent speaker at many conferences including BEDcon, OOP, JavaZone, Devoxx, SpringOne, JavaOne, JFokus, Øredev to name a few. Follow him at @olivergierke


          OAC: Essbase – Loading Data   
After my initial quick pass through Essbase under OAC here, this post looks at the data loading options available in more detail. I used the provided sample database ASOSamp.Basic, which first had to be created, as a working example. Creating ASOSamp Under the time-honoured on-prem install of Essbase, the sample applications were available as an […]
          PeopleSoft and Adaptive Query Optimization in Oracle 12c   
Adaptive Query Optimization is a significant feature in Oracle 12c. Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here). Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics… There are two distinct aspects in Adaptive […]
          Read Oracle Database 12.2 New Features Manual   
I just finished reading the Oracle database 12c new features manual. I have postponed looking at 12.2 until now because for a long time 12.2 was not available for download even though it was available in Oracle’s cloud. Once the download became available I installed 12.2 in a test virtual machine but did not get […]
          Commence Industrial CRM Profiled by leading Manufacturing Journalist   

Commence Industrial CRM Profiled by leading Manufacturing Journalist

Tinton Falls, NJ -- (ReleaseWire) -- 12/14/2006 -- According to Larry Caretsky, President of Commence Corporation (www.commence.com/mfg./), "There is rarely one central database of customer information that can be accessed and shared among the people who need it to efficiently do their jobs. As a result, acting less like a team, these people act independently when conducting business and are far less effective."

Caretsky noted in the feature article that "CEOs of these companies often share how their new enterprise resource planning (ERP) system provides them all the information they need, but fail to recognize that ERP systems provide information after the sale, not before or during the sales process. ERP systems provide no value for improving the efficiency of how to sell and service customers. This is one reason that forecast reports are always inaccurate."

According to Thomas R. Cutler, Manufacturing Journalist in a recent issue of Industrial Focus, "The foundation of any quality sales organization starts with the implementation and management of a sales process. These are the steps required by the sales representative to move the prospect from the initial introduction stage to the closing stage. Few industrial CRM systems provide manufacturers with a structured proven sales process or methodology for evaluating and managing each stage of the sales cycle. A pro-active approach to managing the sales process allows the sales manager to monitor and provide guidance during the cycle, as well as help sales representatives focus on the best opportunities." The article may be read in its entirety at http://www.trcutlerinc.com/54-55.pdf.

Commence offers industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meets the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental CRM solutions with the award winning Commence Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

In an effort to help industrial distributors and manufacturers thrive, Commence Corporation presents Practices That Pay: Leveraging Information to Achieve Industrial Selling Results, a compendium of smart practices from the leading industrial sales and marketing experts and organizations that are growing in today's challenging environment.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9644.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Industrial CRM Is Not Generic CRM   

Industrial CRM Is Not Generic CRM

Tinton Falls, NJ -- (ReleaseWire) -- 12/18/2006 -- Commence offers lean industrial companies complete Freedom Of Choice to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

Unlike generic CRM (Customer Relationship Management) software solutions, the Commence Contact Management module is designed to increase employee productivity by enabling them to capture, track, manage and share all client-specific or prospect-specific information. The information is stored in a single unified database where it is immediately accessible to all authorized employees without having to move from system to system. This results in a manufacturer's ability to significantly improve the process of marketing, selling and servicing customers.

Key features, according to the Commence President & CEO, Larry Caretsky, include the following:

• Account/Contact Management Capture, manage and share a complete account and contact profile including history with others throughout the organization.
• Calendar & Activity Management (Notes, History, Attachments) Synchronize calendars and address books, manage activity remotely. (personal & groups)
• Time Management View pending appointments and to-dos by due date and action type. Receive reminders when appointments and to-dos are due.
• Advanced Desktop Integration MS Outlook/Word/Excel Utilize desktop tools to help manage your daily business, e-mails, e-mail logging, letter templates, proposals, quotes and contracts.
• Activity Tracking Associate all correspondence including, calls, meetings, emails, and service history with the account, contact and responsible employee.
• File Attachments Track external documents and files associated with an account, contact, opportunity, and service ticket.
• Mail merges /Letter Templates Create email, printed, or fax mailings to select contacts or a list of accounts.
• Business Process Automation Automate specific business tasks and functions based on selected criteria.
• Business Alerts/Alarms/Notification Automatically receive notification when specific business conditions are met.
• Mobile Stay in touch and manage activity while away from the office. Synchronize and work off-line.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9653.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Industrial CRM by Commence Spells out Competitive Advantages   

Industrial CRM by Commence Spells out Competitive Advantages

Tinton Falls, NJ -- (ReleaseWire) -- 12/01/2006 -- Key Industrial CRM features, according to the Commence President & CEO, Larry Caretsky, include the following:

Account/Contact Management Capture, manage and share a complete account and contact profile including history with others throughout the organization.

Calendar & Activity Management (Notes, History, Attachments) Synchronize calendars and address books, manage activity remotely. (personal & groups)

Time Management View pending appointments and to-dos by due date and action type. Receive reminders when appointments and to-dos are due.

Advanced Desktop Integration MS Outlook/Word/Excel Utilize desktop tools to help manage your daily business, e-mails, e-mail logging, letter templates, proposals, quotes and contracts.

Activity Tracking Associate all correspondence including, calls, meetings, emails, and service history with the account, contact and responsible employee.

File Attachments Track external documents and files associated with an account, contact, opportunity, and service ticket.

Mail merges /Letter Templates Create email, printed, or fax mailings to select contacts or a list of accounts.

Business Process Automation Automate specific business tasks and functions based on selected criteria.

Business Alerts/Alarms/Notification Automatically receive notification when specific business conditions are met.

Mobile Stay in touch and manage activity while away from the office. Synchronize and work off-line.

Unlike generic CRM (Customer Relationship Management) software solutions, the Commence Contact Management module is designed to increase employee productivity by enabling them to capture, track, manage and share all client-specific or prospect-specific information. The information is stored in a single unified database where it is immediately accessible to all authorized employees without having to move from system to system. This results in a manufacturer's ability to significantly improve the process of marketing, selling and servicing customers.

Commence offers lean industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9408.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Commence Industrial CRM Profiled by Manufacturing Journalist Thomas R. Cutler   

Commence Industrial CRM Profiled by Manufacturing Journalist Thomas R. Cutler

Tinton Falls, NJ -- (ReleaseWire) -- 11/29/2006 -- According to Thomas R. Cutler, Manufacturing Journalist in a recent issue of Industrial Focus, "The foundation of any quality sales organization starts with the implementation and management of a sales process. These are the steps required by the sales representative to move the prospect from the initial introduction stage to the closing stage. Few industrial CRM systems provide manufacturers with a structured proven sales process or methodology for evaluating and managing each stage of the sales cycle. A pro-active approach to managing the sales process allows the sales manager to monitor and provide guidance during the cycle, as well as help sales representatives focus on the best opportunities." The article may be read in its entirety at http://www.trcutlerinc.com/54-55.pdf.

According to Larry Caretsky, President of Commence Corporation (www.commence.com/mfg./), "There is rarely one central database of customer information that can be accessed and shared among the people who need it to efficiently do their jobs. As a result, acting less like a team, these people act independently when conducting business and are far less effective."

Caretsky noted in the feature article that "CEOs of these companies often share how their new enterprise resource planning (ERP) system provides them all the information they need, but fail to recognize that ERP systems provide information after the sale, not before or during the sales process. ERP systems provide no value for improving the efficiency of how to sell and service customers. This is one reason that forecast reports are always inaccurate."

Commence offers industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental CRM solutions with the award-winning Commence Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

In an effort to help industrial distributors and manufacturers thrive, Commence Corporation presents Practices That Pay: Leveraging Information to Achieve Industrial Selling Results, a compendium of smart practices from the leading industrial sales and marketing experts and organizations that are growing in today's challenging environment.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9346.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Commence Lean Industrial CRM Teaches Methods to Avoid Dirty Data   

Commence Lean Industrial CRM Teaches Methods to Avoid Dirty Data

Tinton Falls, NJ -- (ReleaseWire) -- 11/22/2006 -- In an effort to help industrial distributors and manufacturers thrive, Commence Corporation presents Practices That Pay: Leveraging Information to Achieve Industrial Selling Results, a compendium of smart practices from the leading industrial sales and marketing experts and organizations that are growing in today's challenging environment.

According to Larry Caretsky, President of Commence Corporation (www.commence.com/mfg./), "One of the most important things you can do when you decide to automate sales is to have one person responsible for the administration of your database. Unless you have a particularly business savvy IT department with hours to spare, you should probably look for an employee within your marketing and sales team. This person would maintain the data quality, reviewing new data that your team enters, and adjusting it as necessary. These people typically become experts at other valuable skills, such as creating reports, making minor customizations, and can be great at training new employees to use the system."

Caretsky suggests that Lean CRM has many important variables. "With unreliable data, industrial distributors and manufacturers have reported they were unable to reap most of the benefits of a sales and marketing database. Salespeople will quickly realize the data is inaccurate, and revert to old habits to make sure they have the information they need to service customers. Another potential downfall is reduced customer service. The last thing you want is for customers to feel like you don't know them, especially if you've been doing business with them for years. But this is exactly what can happen if customer data is entered incorrectly, out of date, or mismanaged. This problem is exacerbated by the prevalence of legacy systems in industrial organizations with decades of 'dirty data.'"

Commence offers lean industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9095.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          CRM on Demand Provides Unique Platform Benefits   

CRM on Demand Provides Unique Platform Benefits

Tinton Falls, NJ -- (ReleaseWire) -- 11/13/2006 -- Commence CRM On-Demand provides breakthrough technology for manufacturers and distributors with greater functionality and flexibility than traditional application service offerings. Using the industrial-strength Java (J2EE) platform, Commence CRM On-Demand offers robust functionality, ease of use and limitless scalability. Customization capabilities allow user-defined fields, custom reports, queries, filters, and even personalized desktop settings.

Platform Highlights

Enterprise Class Platform
Encrypted Database Security
Automated Processes
One-to-Many Data Relationships
Remote Synchronization
Built-in Report Writer
Multi-level Security
Centralized File management
Mail Merge with MS Office
Web E-Mail Client
Global Search
On-Line Help and Knowledgebase Facility
Support for Handheld Devices
Project Tracking
Group Calendar & Scheduling
E-mail Integration
Web Integration
Application Programming Interface (API)

Platform Benefits include:

Scalable enterprise class platform
Flexible architecture that promotes customization and add-on functionality
Operate quickly without IT infrastructure cost
Customizable without the headaches of traditional On-Demand offerings
Integrate people, processes and technology for improved performance and agility

Commence CRM On-Demand allows manufacturers and distributors to focus on sales efficiency and customer service, monitor and improve business performance and drive higher profits by bringing down cost through streamlining processes.

Commence offers lean industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9090.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Informing Business Decisions with Lean CRM   

Informing Business Decisions with Lean CRM

Tinton Falls, NJ -- (ReleaseWire) -- 11/08/2006 -- According to Larry Caretsky, President of Commence Corporation (www.commence.com/mfg./), "Analyzing and reviewing the data to inform business decisions is critical for industrial executives. One of the most valuable parts of a lean CRM sales automation system is the data accumulated after a few months of use."

To mine this data, smart industrial distributors and manufacturers have followed these steps.

1. Identify the business problem.
2. Mine data to transform data into actionable information.
3. Act on the information.
4. Measure the results.

Caretsky cites an example: A sales manager at an electrical distributor instinctively knew that some customers were getting too much service based on their volume of annual sales. To verify his gut feeling, he mined data in his sales and marketing database to see the number of quotes, calls, and service tickets for each customer and compared it to the total volume of sales in the previous 12 months. A list of 25 customers was obviously taking a lot of the sales team's effort but delivering comparably little revenue. The sales manager then worked with each account's sales rep to come up with a plan to either increase the revenue from these customers or reduce the time spent servicing them. After 3 months of their efforts, he reviewed the data and found that significant profitability improvements had been made.
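A query along the lines of the sketch below could produce that kind of list; the customers, quotes, calls, service_tickets and orders table names are purely hypothetical, used only to illustrate the effort-versus-revenue comparison rather than any particular CRM schema:

-- Service effort vs. revenue over the trailing 12 months (hypothetical schema)
SELECT c.customer_name,
       (SELECT COUNT(*) FROM quotes q          WHERE q.customer_id = c.customer_id AND q.created_at >= DATEADD(month, -12, GETDATE())) AS quotes,
       (SELECT COUNT(*) FROM calls k           WHERE k.customer_id = c.customer_id AND k.call_date  >= DATEADD(month, -12, GETDATE())) AS calls,
       (SELECT COUNT(*) FROM service_tickets t WHERE t.customer_id = c.customer_id AND t.created_at >= DATEADD(month, -12, GETDATE())) AS tickets,
       (SELECT ISNULL(SUM(o.order_total), 0)   FROM orders o  WHERE o.customer_id = c.customer_id AND o.order_date >= DATEADD(month, -12, GETDATE())) AS revenue_12m
FROM   customers c
ORDER  BY revenue_12m ASC, tickets DESC;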

In an effort to help industrial distributors and manufacturers thrive, Commence Corporation presents Practices That Pay: Leveraging Information to Achieve Industrial Selling Results, a compendium of smart practices from the leading industrial sales and marketing experts and organizations that are growing in today's challenging environment.

Commence offers lean industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-9010.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          Lean Industrial CRM Data Integration Lean by Commence Corporation   

Lean Industrial CRM Data Integration Lean by Commence Corporation

Tinton Falls, NJ -- (ReleaseWire) -- 10/23/2006 -- According to Larry Caretsky, President of Commence Corporation (www.commence.com/mfg./), "Integration is a word many vendors are afraid of, and with good reason if their software can't support it. But industrial distributors and manufacturers need to carefully think about integration before they write it off as a "nice-to-have". Depending on your functional requirements, integration may be a necessity."

There are two main kinds of integration to consider: back-end and contact management. If manufacturers choose to integrate the sales and marketing database with back-end accounting, ERP, or manufacturing software, they gain the ability to provide the sales team with a complete view of the customer, potentially including order history, pricing requests, and ship dates. Integration can also significantly help with quote entry, follow-up, and tracking. By increasing the availability of product and pricing information, important tools are given to an industrial sales force to serve customers while gaining efficiencies. Once a manufacturer has a significant volume of reliable data, it can start to feed the data entered in the sales system to the back-end system to facilitate improved purchasing and manufacturing forecasting.

Caretsky suggests that Lean CRM has many important variables. "If you choose to integrate contact management, then all relevant contact-related data, including calendar, will be available in software such as Microsoft Outlook and able to synchronize to the sales team's PDAs or Pocket PCs. The main driver behind contact management integration is to make sure users don't have to type the same piece of data more than once, an obvious waste of time, and something that most sales people are unlikely to actually do."

In an effort to help industrial distributors and manufacturers thrive, Commence Corporation presents Practices That Pay: Leveraging Information to Achieve Industrial Selling Results, a compendium of smart practices from the leading industrial sales and marketing experts and organizations that are growing in today's challenging environment.

Commence offers lean industrial companies complete "Freedom Of Choice" to select the solutions and platform that best meet the business requirements of manufacturers and distributors. The comprehensive CRM Industrial application suite is available for use on premise or on-demand as a hosted service. Industrial leaders often build departmental lean CRM solutions with the award-winning Commence Lean Industrial CRM Framework. These choices are why so many industrial companies choose Commence as the solution for managing customer relationships. All Commence Industrial solutions support mobile or wireless connectivity and integration to back-office accounting and ERP systems.

For more information on this press release visit: http://www.releasewire.com/press-releases/release-8736.htm

Media Relations Contact

Larry Caretsky
President
Commence
Telephone: 732-380-9100
Email: Click to Email Larry Caretsky
Web: http://www.commence.com/mfg/


          AMD EPYC 7601 CPU Hammers SiSoft Sandra Benchmark Database With 32 Cores And 64 Threads   
AMD EPYC 7601 CPU Hammers SiSoft Sandra Benchmark Database With 32 Cores And 64 Threads

It feels a little weird to write about performance results for AMD's EPYC processors and not have to tie the word "leak" into it. As we covered just last week, AMD has finally unleashed its hugely anticipated EPYC processor line for the server market, and to say it's long overdue would be a gross understatement. There is no doubt that Ryzen
          Build a Website by amiragomaa   
Hi there, I have a website built in Dreamweaver which needs to be updated. I would need to add on a flight & hotel booking system, a live chat & some parallax scrolling on the landing page. The database I have is SQL (XML links). Anyone able to help? Please reply with pricing... (Budget: £20 - £250 GBP, Jobs: Adobe Dreamweaver, Website Design)
          GeoDataSource World Cities Database (Premium Edition) July.2017   
GeoDataSource World Cities Database with Latitude Longitude Information
          Database Tour Pro 8.2.4.33   
Cross-database tool with integrated report builder
          IP2Location IP-COUNTRY-REGION-CITY-ISP Database July.2017   
IP-COUNTRY-REGION-CITY-ISP translates IP address to country, region, city and ISP
          GeoDataSource World Cities Database (Platinum Edition) July.2017   
GeoDataSource World Cities Database with Population Information
          Quick Heal Virus Database 17.00(30 June 2   
Offers you the latest virus definitions you can use to manually update
          Dbvisit Replicate 2.9.00   
Create a duplicate for your Oracle database with this program
          RankChart - Compare Site Performance, Alexa History Charts and Statistics   
RankChart History Database. Compare your site rank position to competitors. Track your website performance even if it's under 300k. Find top sites.
          Chinese Firm Writes First SMS Worm   

Ah another first, and once again China is at the forefront! We recently reported about a Chinese company sharing their huge malware database and now a group of Chinese companies has managed to develop the first SMS worm! It’s a pretty cool concept, abusing the Symbian Express Signing procedure. It reminds me of the heydays […]

The post Chinese Firm Writes First SMS Worm appeared first on Darknet - The Darkside.


          Chinese Company Shares Huge Malware Database   

We need more companies like this that acknowledge that hoarding data isn’t doing anything for the greater good. To really stamp out the core problems you have to share the data you’ve correlated across the world so everyone can put together what they have and do something about it. It seems like with China pumping out […]

The post Chinese Company Shares Huge Malware Database appeared first on Darknet - The Darkside.


          Comment on sp_SrvPermissions & sp_DBPermissions V6.0 Finally! by Albert   
Hi, great scripts. But I have found an issue: when you run sp_DBPermissions @DBName = 'ALL' on a server with databases in an AlwaysOn group that are replicas on that server and are configured in read-only mode, you get:

"Msg 978, Level 14, State 1, Line 17 The target database ('TEST_AG') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online."

To solve this, I have changed the lines like

FOR SELECT name FROM sys.databases ORDER BY name

to

FOR SELECT d.name
FROM sys.databases d
LEFT OUTER JOIN sys.dm_hadr_availability_replica_states hars
  ON d.replica_id = hars.replica_id
WHERE ISNULL(hars.role_desc, 'PRIMARY') = 'SECONDARY'
ORDER BY name

and it seems to work fine. TIA
          (IT) Software Developer - Highly Skilled   

Location: Englewood Cliffs, NJ   

Job Title: Software Developer - Highly Skilled

Qualifications: The CNBC Digital Technology team is seeking a Software Engineer to manage and build software solutions across CNBC's Digital Platform. The software engineer (primarily focusing on Backend development) will be responsible for building and managing software solutions for various projects. This role requires hands-on software development skills and deep technical expertise in web development, especially in developing with core Java, Spring, and Hibernate. The software engineer will be required to provide estimates for his tasks, follow technology best practices, participate in and adhere to CNBC's Technical Design Review Process and performance metrics/scalability, and support integration and release planning activities, in addition to being available for level 3 support to triage production issues.

Required Skills:
• BS degree or higher in Computer Science with a minimum of 5+ years of relevant, broad engineering experience is required.
• Experience with various Web-based Technologies, OO Modeling, Middleware, Relational Databases and distributed computing technologies.
• Experience in Digital Video workflows (Ingest, Transcode, Publish)
• Experience in Content Delivery Networks (CDN)
• Experience with Video Content Management Systems
• Expertise in cloud transcoding workflows.
• Demonstrated experience running projects end-to-end
• Possess expert knowledge in Performance, Scalability, Security, Enterprise System Architecture, and Engineering best practices.
• Experience working on large scale, high traffic web sites/applications.
• Experience working in the financial, media domain.

Responsibilities: Languages and Software:
• Languages: JAVA (Core Java, Multithreading), Object Oriented languages
• Web Technologies: XML, JSON, HTML, CSS, OO JavaScript, jQuery, AJAX, SOAP and RESTful web services
• Framework: MVC Framework like Spring, JPA, Hibernate, Jaxb
• Database: RDBMS like MySQL, Oracle, NO SQL databases
• Tools: Git, SVN, Eclipse, Jira
 
Type: Contract
Location: Englewood Cliffs, NJ
Country: United States of America
Contact: Hiring Manager
Advertiser: First Tek
Reference: NT17-03957

          (IT) Full Stack Developer   

Rate: £350 - £450 per Day   Location: Glasgow, Scotland   

Full Stack Developer - 12 month contract - Glasgow City Centre

One of Harvey Nash's leading FS clients is looking for an experienced full stack developer with an aptitude for general infrastructure knowledge. This will be an initial 12 month contract, however the likelihood of extension is high. The successful candidate will be responsible for creating strategic solutions across a broad technology footprint. Experience within financial services would be advantageous, although not a prerequisite.

Skill Set:
- Previous full-stack development experience with C#/C++/Java, Visual Studio, .Net, Windows/Linux web development
- Understanding of secure code development/analysis
- In-depth knowledge of how software works
- Development using SQL and Relational Databases (eg SQL, DB2, Sybase, Oracle, MQ)
- Windows Automation and Scripting (PowerShell, WMI)
- Familiarity with common operating systems and entitlement models (Windows, Redhat Linux/Solaris)
- Understanding of network architecture within an enterprise environment (eg Firewalls, Load Balancers)
- Experience of developing in a structured Deployment Environment (DEV/QA/UAT/PROD)
- Familiarity with the Software Development Life Cycle (SDLC)
- Experience with Source Control and CI systems (eg GIT, Perforce, Jenkins)
- Experience with Unit and Load testing tools
- Experience with Code Review products (eg Crucible, FishEye)
- Excellent communication/presentation skills and experience working with distributed teams
- Candidates should demonstrate a strong ability to create technical, architectural and design documentation

Desired Skills:
- Any experience creating (or working with) a "developer desktop" (dedicated desktop environment for developers)
- Experience of the Linux development environment
- An interest in cyber security
- Knowledge of Defense in Depth computing principles
- Experience with security products and technologies (eg Cyberark, PKI)
- Systems management, user configuration and technology deployments across large, distributed environments (eg Chef, Zookeeper)
- Understanding of core Windows Infrastructure technologies (eg Active Directory, GPO, CIFS, DFS, NFS)
- Monitoring Tools (eg Scom, Netcool, WatchTower)
- Experience with Apache/Tomcat web server, virtualisation
- Design patterns and best practices
- Agile development: Planning, Retrospectives etc.

To apply for this role or to discuss it in more detail then please call me and send a copy of your latest CV.
 
Rate: £350 - £450 per Day
Type: Contract
Location: Glasgow, Scotland
Country: UK
Contact: Cameron MacGrain
Advertiser: Harvey Nash Plc
Start Date: ASAP
Reference: JS-329601/001

          (IT) Hadoop Architect/Developer   

Location: Foster City, CA   

Key Responsibilities: Visa is currently seeking a Senior Hadoop Architect/Developer with extensive experience in RDBMS data modelling/development and Tableau developer experience in the Finance area, to deliver the Corporate Analytics new strategic framework initiative. This BI platform provides analytical/operational capability to various business domains that are to be used by Corporate Finance Systems. This Developer role will be primarily responsible for designing, developing and implementing the Hadoop framework ETL using relational databases and using Tableau reporting on it. The new Hadoop framework is to be used to build from scratch the Oracle Financial analytics/P2P/Spend/Fixed asset solution into the Hadoop framework from OBIA. The individual should have a finance business background with extensive experience in OBIA Fixed Assets, P2P, Financial Analytics, Spend Analytics, and Projects. Expert in Hadoop framework components like Sqoop, Hive, Impala, Oozie, Spark, HBase, HDFS.
• Architect, design and implement column family schemas of Hive and HBase within HDFS. Assign schemas and create Hive tables. Manage and deploy HDFS HBase clusters.
• Develop efficient Pig and Hive scripts with joins on datasets using various techniques. Assess the quality of datasets for a Hadoop data lake. Apply different HDFS formats and structures like Parquet, Avro, etc. to speed up analytics.
• Fine tune Hadoop applications for high performance and throughput. Troubleshoot and debug any Hadoop ecosystem run time issues.
• Hands on experience in configuring and using Hadoop ecosystem components like Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Spark, Impala, Pig, Oozie, Zookeeper and Flume.
• Desired candidate should have strong programming skills in Scala or Python to work on Spark.
• Experience in converting core ETL logic using PySpark SQL or Scala.
• Good experience in Apache Hadoop MapReduce programming, Pig scripting and distributed applications and HDFS.
• In-depth understanding of Data Structures and Algorithms.
• Experience in managing and reviewing Hadoop log files.
• Experienced in setting up standards and processes for Hadoop based application design and implementation.
• Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems and vice-versa.
• Experience in Object Oriented Analysis and Design (OOAD) and development of software using UML Methodology, good knowledge of J2EE design patterns and Core Java design patterns.
• Experience in managing Hadoop clusters using the Cloudera Manager tool.
• Very good experience in the complete project life cycle (design, development, testing and implementation) of Client Server and Web applications.
• Experience in connecting Hadoop framework components to Tableau reporting.
• Expert in Tableau data blending and data modeling.
• Create Functional and Technical design documentation.
• Perform Unit and QA testing data loads and development of scripts for data validation.
• Support QA, UAT, SIT and
 
Type: Contract
Location: Foster City, CA
Country: United States of America
Contact: Baljit Gill
Advertiser: Talentburst, Inc.
Reference: NT17-11842

          (IT) Java Tech Specialist/Developer/Engineer   

Location: Belfast, Northern Ireland   

Java Tech Specialist/Developer/Engineer

This is a challenging and exciting opportunity to work within a leading banking environment and work closely with a Front Office business unit. The quality trading is now enhancing our leading position in the financial market. The successful candidate will play a key role in that success and be part of a large group of high calibre developers. The successful candidate must be able to operate with a high level of self-motivation and produce results with a quick turnaround on key deliverables.

Key Responsibilities:
• Requirements analysis and capture, working closely with the business users and other technology teams to define solutions.
• Development of the Electronic Execution applications and components, as part of a global development effort.
• Liaison with support and development teams.
• Third line support of the platform during trading hours.
• Applying Equities financial product knowledge to the full development life cycle.
• Defining and evolving architectural standards; promoting adherence to these standards.
• Technical mentoring of junior team members.

Knowledge and Experience:
• Strong Java background
• Unix/Linux
• OO programming
• Good relational database experience
• Working SQL knowledge

We have both permanent and contract positions available so please do not hesitate to apply. If you apply for this role, our thanks for your interest. However, due to the high level of applicants expected we are unable to respond to every one. Therefore, if you have not heard from Eurobase People within 5 working days, then, unfortunately, your application has been unsuccessful. Eurobase People are acting as an Employment Agency.
 
Type: Unspecified
Location: Belfast, Northern Ireland
Country: UK
Contact: Adam Cohen
Advertiser: Eurobase People
Email: Adam.Cohen.F5223.39364@apps.jobserve.com
Start Date: ASAP
Reference: JS-TS

          JBoss Tools Team: JBoss Tools 4.5.0.AM1 for Eclipse Oxygen.0   

Happy to announce 4.5.0.AM1 (Developer Milestone 1) build for Eclipse Oxygen.0.

Downloads available at JBoss Tools 4.5.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

Server Tools

EAP 7.1 Server Adapter

A server adapter has been added to work with EAP 7.1. It’s currently released in Tech-Preview mode only, since the underlying WildFly 11 continues to be under active development with substantial opportunity for breaking changes. This new server adapter includes support for incremental management deployment like its upstream WildFly 11 counterpart.

Removal of Event Log and other Deprecated Code

The Event Log view has been removed. The standard Eclipse log should be used for errors and other important messages regarding server state transitions.

Hibernate Tools

Hibernate Search Support

We are glad to announce support for Hibernate Search. The project was started by Dmitrii Bocharov in the Google Summer of Code program; in the current release of JBoss Tools it has been successfully transferred from Dmitrii’s repository into the jbosstools-hibernate repository and has become a part of the JBoss family of tools.

Functionality

The plugin was conceived as a kind of Luke tool inside Eclipse. It is intended to be more convenient than launching a separate application, and it picks up the configuration directly from your Hibernate configuration.

Two options were added to the console configuration submenu: Index Rebuild and Index Toolkit. They become available when the Hibernate Search libraries are present in the build path of your application (e.g. via Maven).

Configuration menu items
Index Rebuild

When introducing Hibernate Search in an existing application, you have to create an initial Lucene index for the data already present in your database.

The option "Index Rebuild" will do so by re-creating the Lucene index in the directory specified by the hibernate.search.default.indexBase property.

Hibernate Search indexed entities
Hibernate Search configuration properties
Index Toolkit

"Open Index Toolkit" submenu of the console configuration opens an "Index Toolkit" view, which has three tabs: Analyzers, Explore Documents, Search.

Analyzers

This tab allows you to view the result of work of different Lucene Analyzers. The combo-box contains all classes in the workspace which extend org.apache.lucene.analysis.Analyzer, including custom implementations created by the user. While you type the text you want to analyse, the result immediately appears on the right.

Analyzers
Explore Documents

After creating the initial index you can now inspect the Lucene Documents it contains.

All entities annotated as @Indexed are displayed in the Lucene Documents tab. Tick the checkboxes as needed and load the documents. Iterate through the documents using arrows.

Lucene Documents inspection
Searching

The plugin passes the input string from the search text box to the QueryParser which parses it using the specified analyzer and creates a set of search terms, one term per token, over the specified default field. The result of the search pulls back all documents which contain the terms and lists them in a table below.

Search tab

Demo

Docker

Docker Client Upgrade

The version of docker-client used by the Docker Tooling plug-ins has been upgraded to 6.1.1 for the 3.0.0 release of the Docker Tooling feature.

Forge

Forge Runtime updated to 3.7.1.Final

The included Forge runtime is now 3.7.1.Final. Read the official announcement here.

startup

Enjoy!

Jeff Maury


          World-Check concedes wrongful Palestine Solidarity Campaign listing on ‘terrorism’ database   
Bethlehem/PNN/ The PSC has welcomed an acknowledgement from World-Check that PSC should never have been placed on the database at all and, specifically, should not have been associated with “terrorism”. There were no grounds to suggest that either were associated with terrorism related activity or that the organisation presented any kind of financial risk. In …
          alpharooq/cli (1.0.0)   
PHP database framework
          Apache HBase: The NoSQL Database for Hadoop and Big Data   

Use HBase when you need random, real-time read/write access to your Big Data. The goal of the HBase project is to host very large tables — billions of rows multiplied by millions of columns — on clusters built with commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google’s Bigtable. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

Refcardz are FREE cheat sheets made just for developers. It’s the easy way to stay on top of the newest technologies!




          SEO Team Leader - Crayon Infotech - India   
Understanding of technical aspects of Internet marketing, including database-driven functionalities, html, ftp, pixels, general web functionality, etc....
From Crayon Infotech - Sat, 24 Jun 2017 06:25:24 GMT - View all India jobs
          Creating a SharePoint DataLake with SQL Server using Enzo Unified   

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/06/19/creating-a-sharepoint-datalake-with-sql-server-using-enzo-unified.aspx

In this blog post I will show how you can easily copy a SharePoint list to a SQL Server table and keep the data updated at a specific frequency, allowing you to easily create a DataLake for your SharePoint lists. This will work with SharePoint 2013 and higher, and with SharePoint Online. While you can spend a large amount of time learning the SharePoint APIs and their many subtleties, it is far more efficient to configure simple replication jobs that will work under most scenarios.

The information provided in this post will help you get started in setting up a replication of SharePoint lists to a SQL Server database, so that you can query the local SQL Server database from Excel, reporting tools, or even directly against the database. You should also note that Enzo Unified provides direct real-time access to SharePoint Lists through native SQL commands so you can view, manage and update SharePoint List Items.

image

 

Installing Enzo Unified

To try the steps provided in this lab, you will need the latest version of Enzo Unified (1.7 or higher) provided here:  http://www.enzounified.com/download. The download page also contains installation instructions.

Enzo Unified Configuration

Once Enzo Unified has been installed, start Enzo Manager (located in the Enzo Manager directory where Enzo was installed). Click on File –> Connect and enter the local Enzo connection information. 

NOTE:  Enzo Unified is a Windows Service that looks like SQL Server; you must connect Enzo Manager to Enzo Unified which by default is running on port 9550. The password should be the one you specified during the installation steps. The following screen shows typical connection settings against Enzo Unified:

image

Create Connection Strings

Next, you will need to create “Central Connection Strings” so that Enzo will know how to connect to the source system (SharePoint) and the destination database (SQL Server). You manage connection strings from the Configuration –> Manage Connection Strings menu. In the screen below, you can see that a few connection strings have been created. The first one is actually a connection string to Enzo Unified, which we will need later.

image

The next step is to configure the SharePoint adapter by specifying the credentials used by Enzo Unified. Configuring the SharePoint adapter is trivial: three parameters are needed: a SharePoint login name, the password for the login, and the URL to your SharePoint site. You should make sure the login has enough rights to access SharePoint lists and access SharePoint Fields.

image

Once the configuration to the SharePoint site is complete, you can execute commands against Enzo Unified using SQL Server Management Studio.

Fetch records from SharePoint using SQL Server Management Studio

To try the above configuration, open SQL Server Management Studio, and connect to Enzo Unified (not SQL Server). From the same machine where Enzo is running, a typical connection screen looks like this:

image

Once you are connected to Enzo Unified, and assuming your SharePoint site has a list called Enzo Test, you can run simple SQL commands like this:

SELECT * FROM SharePoint.[list@Enzo Test]

Create a Virtual Table

You will also need to create a Virtual Table in Enzo so that the SharePoint list looks like a table in Enzo Unified. A Virtual Table is made of columns that match the SharePoint list you want to replicate. To do this, open Enzo Manager, select the SharePoint adapter, and create a new Virtual Table by clicking on the NEW icon; provide a name for the Virtual Table, and select the columns to create through a picker. In the example below, I am creating a Virtual Table called vEnzoTest, which mirrors a SharePoint List called ‘Enzo Test’.

image

The picker allows you to execute the SQL command to validate it is working. Clicking OK will automatically add all the requested columns to the Virtual Table.

Make sure to pick the ID and Modified columns; this will be required later.

image

Once completed, I can run commands like this against the SharePoint adapter using SQL Server Management Studio:

SELECT * FROM SharePoint.vEnzoTest

SELECT * FROM SharePoint.vEnzoTest WHERE ID > 100

The difference from the previous SQL command is that the virtual table will only return the columns specified by the Virtual Table.

Configure Data Sync Jobs

Once the Virtual Table has been created, you can add new jobs to copy the SharePoint data into SQL Server and keep updates synchronized with the SQL Server table. A simple configuration screen allows you to set up the data sync jobs. You can choose which operations to replicate, the destination table, and a schedule for data updates. In the example below I am setting up 3 data sync jobs for the vPosts Virtual Table: initialization, insert/update, and delete, updated every 5 minutes.

image

You can also use Enzo Manager to monitor the data sync jobs, or run them manually.

Once the jobs have been created, you can simply connect to the SQL Server database (not Enzo Unified) and see the replicated data. For example, you can connect to SQL Server, and run the following statement assuming the above destination table (as shown in the screenshot) has been created.

SELECT * FROM DataLake.SharePoint.vPosts

Conclusion

This post shows you how to easily configure Enzo Unified to replicate SharePoint lists to a local SQL Server database to enable reporting and other data integration projects such as a Data Lake.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.


          Writing a Voice Activated SharePoint Todo List - IoT App on RPi   

Originally posted on: http://geekswithblogs.net/hroggero/archive/2017/05/16/writing-a-voice-activated-sharepoint-todo-list---iot-app.aspx

Ever wanted to write a voice activated system on an IoT device to keep track of your “todo list”, hear your commands being played back, and have the system send you a text message with your todo list when it’s time to walk out the door?  Well, I did. In this blog post, I will provide a high level overview of the technologies I used, why I used them, a few things I learned along the way, and partial code to assist with your learning curve if you decide to jump on this.  I also had the pleasure of demonstrating this prototype at Microsoft’s Community Connections in Atlanta in front of my colleagues.

How It Works

I wanted to build a system using 2 Raspberry Pis (one running Windows 10 IoT Core, and another running Raspbian) that achieved the following objectives:

  • Have 2 RPis that communicate through the Azure Service Bus
    This was an objective of mine, not necessarily a requirement; the intent was to have two RPis running different Operating Systems communicate asynchronously without sharing the same network
  • Learn about the Microsoft Speech Recognition SDK
    I didn’t want to send data to the cloud for speech recognition; so I needed an SDK on the RPi to perform this function; I chose the Microsoft Speech Recognition SDK for this purpose
  • Communicate to multiple cloud services without any SDK so that I could program the same way on Windows and Raspbian (Twilio, Azure Bus, Azure Table, SharePoint Online)
    I also wanted to minimize the learning curve of finding which SDK could run on a Windows 10 IoT Core, and Raspbian (Linux); so I used Enzo Unified to abstract the APIs and instead send simple HTTPS commands allowing me to have an SDK-less development environment (except for the Speech Recognition SDK). Seriously… go find an SDK for SharePoint Online for Raspbian and UWP (Windows 10 IoT Core).

The overall solution looks like this:

image

Technologies

In order to achieve the above objectives, I used the following bill of materials:

• 2x Raspberry Pi 2 Model B: Note that one RPi runs on Windows 10 IoT Core, and the other runs Raspbian (http://amzn.to/2qnM6w7)
• Microphone: I tried a few, but the best one I found for this project was the Mini AKIRO USB Microphone (http://amzn.to/2pGbBtP)
• Speaker: I also tried a few, and while there is a problem with this speaker on RPi and Windows, the Logitech Z50 was the better one (http://amzn.to/2qrNkop)
• USB Keyboard: I needed a simple way to have a keyboard and mouse while traveling, so I picked up the iPazzPort Mini Keyboard; awesome… (http://amzn.to/2rm0FOh)
• Monitor: You can use an existing monitor, but I also used the portable ATian 7 inch display. A bit small, but does the job. (http://amzn.to/2pQ5She)
• IoT Dashboard: Utility that allows you to manage your RPis running Windows; make absolutely sure you run the latest build; it should automatically upgrade, but mine didn’t. (http://bit.ly/2rmCWOU)
• Windows 10 IoT Core: The Microsoft O/S used on one of the RPis; use the latest build; mine was 15063; if you are looking for instructions on how to install Windows from a command prompt, the link provided proved useful (http://bit.ly/2pG9gik)
• Raspbian: Your RPi may be delivered with an SD card preloaded with the necessary utilities to install Raspbian; connecting to a wired network makes the installation a breeze. (http://bit.ly/2rbnp7u)
• Visual Studio 2015: I used VS2015, C#, to build the prototype for the Windows 10 IoT Core RPi (http://bit.ly/2e6ZGj5)
• Python 3: On the Raspbian RPi, I used Python 3 to code. (http://bit.ly/1L2Ubdb)
• Enzo Unified: I installed and configured an Enzo Unified instance (version 1.7) in the Azure cloud; for Enzo to talk to SharePoint Online, Twilio, Azure Service Bus and Azure Storage, I also needed accounts with these providers. You can try Enzo Unified for free for 30 days. (http://bit.ly/2rm4ymt)

 

Things to Know

Creating a prototype involving the above technologies will inevitably lead you to collect a few nuggets along the way. Here are a few.

Disable Windows 10 IoT Core Updates

While disabling updates is generally speaking not recommended, IoT projects usually require a predictable environment that does not reboot in the middle of a presentation. In order to disable Windows Updates on this O/S I used information published by Mike Branstein on his blog: http://bit.ly/2rcOXt9

Try different hardware, and keep your receipts…

I had to try a few different components to find the right ones; the normally recommended S-150 USB Logitech speakers did not work for me; I lost all my USB ports and network connectivity as soon as I plugged it in. Neither did the JLab USB Laptop speakers. I also tried the 7.1 Channel USB External Sound Card but was unable to make it work (others were successful). For audio input, I also tried the VAlinks Mini Flexible USB microphone; while it worked well, it picked up too much noise compared to the AKIRO, and became almost unusable in a room with 20 people where you have background noise.

Hotel WiFi Connections

This was one of the most frustrating parts of this whole experience on Windows 10 IoT Core. You should know that this operating system does not currently come equipped with a browser. This means that you cannot easily connect to a hotel network, since this usually requires starting a browser so that you can enter a user id and password provided by the hotel. Furthermore, since there is also no way to "forget" a previously registered network, you can find yourself in a serious bind… I first purchased the Skyroam Mobile Hotspot, hoping it would provide the answer. Unfortunately the only time I tried it, in Tampa Florida, the device could not obtain a connection. So I ended up adding a browser object to my UWP application and forcing it to refresh a specific image every time I start the app; this forces the hotel login page to show up when needed. I am still looking for a good solution to this problem.

Speech Privacy Policy on Windows

Because parts of the code I am running leverage the underlying APIs of Cortana, it seems that you must accept the Cortana privacy policy; this is required only the first time you run the application, but it is obviously a major nightmare for applications you may want to ship. I am not aware of any programmatic workaround at this time. This Stack Overflow post provides information about this policy and how to accept it.

What It Looks Like

A picture is worth a thousand words… so here is the complete setup:

20170502_225941

C# Code

Since this is an ongoing prototype I will not share the complete code at this time; however I will share a few key components/techniques I used to make this work.

Speech Recognition

I used both continuous dictation speech recognition, and grammar-based recognition from the Microsoft Speech Recognition API. The difference is that the first one gives you the ability to listen to “anything” being said, and the other will only give you a set of results that match the expected grammar. Both methods give you a degree of confidence so you can decide if the command/text input was sufficiently clear. The following class provides a mechanism for detecting input either through continuous dictation or using a grammar file. The timeout ensures that you do not wait forever. This code also returns the confidence level of the capture.

 

using Enzo.UWP;
using System;
using System.Collections.Generic;

using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.Devices.Gpio;
using Windows.Media.SpeechRecognition;
using Windows.Media.SpeechSynthesis;
using Windows.Storage;

namespace ClientIoT
{

    public class VoiceResponse
    {
        public string Response = null;
        public double RawConfidence = 0;
    }

    public class VoiceInput
    {
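        // Default listening timeout, in seconds, used when the caller does not supply one.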
        private const int SPEECH_TIMEOUT = 3;
        private System.Threading.Timer verifyStatus;
        private string lastInput = "";
        private double lastRawConfidence = 0;
        private bool completed = false;
        private bool success = false;

        public async Task<VoiceResponse> WaitForText(string grammarFile)
        {
            return await WaitForText(SPEECH_TIMEOUT, grammarFile);
        }

        public async Task<VoiceResponse> WaitForText(int timeout = SPEECH_TIMEOUT, string grammarFile = null)
        {
            var resp = new VoiceResponse();
            try
            {
                success = false;
                completed = false;
                lastInput = "";
                lastRawConfidence = 0;

                SpeechRecognizer recognizerInput;
                DateTime dateNow = DateTime.UtcNow;

                recognizerInput = new SpeechRecognizer();
                recognizerInput.ContinuousRecognitionSession.ResultGenerated += ContinuousRecognitionSession_InputResultGenerated;
                recognizerInput.StateChanged += InputRecognizerStateChanged;
                recognizerInput.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.ContinuousRecognitionSession.Completed += ContinuousRecognitionSession_Completed;
                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(SPEECH_TIMEOUT);
                recognizerInput.Constraints.Clear();

                if (grammarFile != null)
                {
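                    // A grammar file was supplied: constrain recognition to the phrases it defines
                    // (without a grammar the session runs as free-form continuous dictation).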
                    StorageFile grammarContentFile = await Package.Current.InstalledLocation.GetFileAsync(grammarFile);
                    SpeechRecognitionGrammarFileConstraint grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarContentFile);
                    recognizerInput.Constraints.Add(grammarConstraint);
                }

                var compilationResult = await recognizerInput.CompileConstraintsAsync();

                // If successful, display the recognition result.
                if (compilationResult.Status != SpeechRecognitionResultStatus.Success)
                {
                    Debug.WriteLine(" ** VOICEINPUT - VoiceCompilationError - Status: " + compilationResult.Status);
                }

                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.RecognitionQualityDegrading += RecognizerInput_RecognitionQualityDegrading;
                await recognizerInput.ContinuousRecognitionSession.StartAsync();

                // Block until the recognition session reports completion (or times out)
                System.Threading.SpinWait.SpinUntil(() =>
                    completed
                );
               
                resp = new VoiceResponse() { Response = lastInput, RawConfidence = lastRawConfidence };
               
                try
                {
                    recognizerInput.Dispose();
                    recognizerInput = null;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine("** WaitForText (1) - Dispose ** " + ex.Message);
                }
            }
            catch (Exception ex2)
            {
                Debug.WriteLine("** WaitForText ** " + ex2.Message);
            }
            return resp;
        }

        private void RecognizerInput_RecognitionQualityDegrading(SpeechRecognizer sender, SpeechRecognitionQualityDegradingEventArgs args)
        {
            try
            {
                Debug.WriteLine("VOICE INPUT - QUALITY ISSUE: " + args.Problem.ToString());
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** VOICE INPUT - RecognizerInput_RecognitionQualityDegrading ** " + ex.Message);
            }
        }

        private void ContinuousRecognitionSession_Completed(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionCompletedEventArgs args)
        {
            if (args.Status == SpeechRecognitionResultStatus.Success
                || args.Status == SpeechRecognitionResultStatus.TimeoutExceeded)
                success = true;
            completed = true;
           
        }

        private void ContinuousRecognitionSession_InputResultGenerated(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionResultGeneratedEventArgs args)
        {
            try
            {
                lastInput = "";
                if ((args.Result.Text ?? "").Length > 0)
                {
                    lastInput = args.Result.Text;
                    lastRawConfidence = args.Result.RawConfidence;
                    Debug.WriteLine(" " + lastInput);
                }
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** ContinuousRecognitionSession_InputResultGenerated ** " + ex.Message);
            }
        }

        private void InputRecognizerStateChanged(SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args)
        {
            Debug.WriteLine("  Input Speech recognizer state: " + args.State.ToString());
        }
    }
}

For example, if you want to wait for a “yes/no” confirmation, with a 3 second timeout, you would call the above code as follows:

var yesNoResponse = await (new VoiceInput()).WaitForText(3, YESNO_FILE);

And the yes/no grammar file looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<grammar
  version="1.0"
  xml:lang="en-US"
  root="enzoCommands"
  xmlns="http://www.w3.org/2001/06/grammar"
  tag-format="semantics/1.0">

  <rule id="root">
    <item>
      <ruleref uri="#enzoCommands"/>
      <tag>out.command=rules.latest();</tag>
    </item>
  </rule>

  <rule id="enzoCommands">
    <one-of>
      <item> yes </item>
      <item> yep </item>
      <item> yeah </item>
      <item> no </item>
      <item> nope </item>
      <item> nah </item>
    </one-of>
  </rule>

</grammar>

Calling Enzo Unified using HTTPS to Add a SharePoint Item

Another important part of the code is its ability to interact with other services through Enzo Unified, so that no SDK is needed on the UWP application. For an overview on how to access SharePoint Online through Enzo Unified, see this previous blog post.

The following code shows how to easily add an item to a SharePoint list through Enzo Unified. Posting this request to Enzo requires two parameters (added as headers) called “name” and “data” (data is an XML string containing the column names and values to be added as a list item).

public static async Task SharePointAddItem(string listName, string item)
{
            string enzoCommand = "/bsc/sharepoint/addlistitemraw";
            List<KeyValuePair<string, string>> headers = new List<KeyValuePair<string, string>>();

            string data = string.Format("<root><Title>{0}</Title></root>", item);

            headers.Add(new KeyValuePair<string, string>("name", listName));
            headers.Add(new KeyValuePair<string, string>("data", data));

            await SendRequestAsync(HttpMethod.Post, enzoCommand, headers);
}

And the SendRequestAsync method below shows you how to call Enzo Unified. Note that I added two cache control filters to avoid HTTP caching, and additional flags for calling Enzo Unified on an HTTPS port where a self-signed certificate is installed.

private static async Task<string> SendRequestAsync(HttpMethod method, string enzoCommand, List<KeyValuePair<string, string>> headers)
{
            string output = "";
            var request = EnzoUnifiedRESTLogin.BuildHttpWebRequest(method, enzoCommand, headers);
           
            var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
            if (IGNORE_UNTRUSTEDCERT_ERROR)
            {
                filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.Untrusted);
                filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.InvalidName);
            }
            filter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
            filter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;

            Windows.Web.Http.HttpClient httpClient = new Windows.Web.Http.HttpClient(filter);

            try
            {
                using (var response = await httpClient.SendRequestAsync(request))
                {
                    output = await response.Content.ReadAsStringAsync();
                }
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(" ** Send Http request error: " + ex.Message);
            }
            return output;
}

Last but not least, the BuildHttpWebRequest method looks like this; it ensures that the proper authentication headers are added, along with the authentication identifier for Enzo:

public static Windows.Web.Http.HttpRequestMessage BuildHttpWebRequest(Windows.Web.Http.HttpMethod httpmethod, string uri, List<KeyValuePair<string,string>> headers)
{
            bool hasClientAuth = false;

            Windows.Web.Http.HttpRequestMessage request = new Windows.Web.Http.HttpRequestMessage();

            request.Method = httpmethod;
            request.RequestUri = new Uri(ENZO_URI + uri);

            if (headers != null && headers.Count() > 0)
            {
                foreach (KeyValuePair<string, string> hdr in headers)
                {
                    request.Headers[hdr.Key] = hdr.Value;
                }
            }

            if (!hasClientAuth)
                request.Headers["authToken"] = ENZO_AUTH_GUID;

            return request;
}

Text to Speech

There is also the Text to Speech aspect, where the system speaks back what it heard before confirming and acting on the command. Playback is actually a bit strange in the sense that it requires a UI thread. In addition, Windows 10 IoT Core and the Raspberry Pi don’t seem to play nicely together: every time a playback occurs, a loud tick can be heard before and after it. A solution appears to be using USB speakers, but none worked for me. The code below simply plays back a specific text and then waits a little while to give the playback enough time to finish (the playback call is non-blocking, so the SpinWait is used to hold the code until the playback has likely completed).

private async Task Say(string text)
{
            SpeechSynthesisStream ssstream = null;

            try
            {
                SpeechSynthesizer ss = new SpeechSynthesizer();
                ssstream = await ss.SynthesizeTextToStreamAsync(text);
            }
            catch (Exception exSay)
            {
                Debug.WriteLine(" ** SPEECH ERROR (1) ** - " + exSay.Message);
            }

            var task1 = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
            {
                try
                {
                    await media.PlayStreamAsync(ssstream);
                }
                catch (Exception exSay)
                {
                    Debug.WriteLine(" ** SPEECH ERROR (2) ** - " + exSay.Message);
                }
            });

            // Wait a little for the speech to complete (rough estimate based on the length of the spoken text)
            System.Threading.SpinWait.SpinUntil(() => 1 == 0, text.Length * 150);

}

Calling the above code is trivial:

await Say("I am listening");

 

Python

The Python code was trivial to build; this RPi was responsible for monitoring events in the Azure Service Bus and turning the attached LED on and off. The following pseudo code shows how to call Enzo Unified from Python without using any SDK:

import requests

# Enzo Unified endpoint and credentials (values elided in the original post)
enzourl_receiveMsg = "http://…/bsc/azurebus/receivedeletefromsubscription"
enzo_guid = "secretkeygoeshere"
topicName = "enzoiotdemo-general"
subName = "voicelight"

while True:
   try:
      headers = {
         'topicname': topicName,
         'authToken': enzo_guid,
         'subname': subName,
         'count': '1',
         'timeoutSec': '1'
      }
      response = requests.get(enzourl_receiveMsg, headers=headers)
      resp = response.json()
      if len(resp['data']['Table1']) > 0:
         # extract the response here...
         pass
   except Exception as ex:
      print(ex)

 

Conclusion

This prototype demonstrated that while there were a few technical challenges along the way, it was relatively simple to build a speech recognition engine that can understand commands using Windows 10 IoT Core, .NET, and the Microsoft Speech Recognition SDK. 

Furthermore, the intent of this project was also to demonstrate that Enzo Unified makes it possible to code against multiple services without the need for an SDK on the client side, regardless of the platform and the development language.  Abstracting SDKs through simple HTTP calls makes it possible to access Twilio, SharePoint Online, Azure services and much more without any additional libraries on the client system.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.


          MS SQL - bcp to export varbinary to image file   

Originally posted on: http://geekswithblogs.net/vaibhavgaikwad/archive/2017/05/16/244551.aspx

I don't do much SQL on a regular day-to-day basis, but when it comes up it gets really exciting. Here is one of those odd days where I wanted to push an image file out of SQL Server that was stored as varbinary(MAX) in the database. As you all know, bcp is a very handy utility when it comes to dumping data out, so I made it my first choice, but I soon realized it was difficult to handle varbinary with the default arguments. Reading the internet, here is what I could learn...

You need a format (.fmt) file for such an export. To generate the format file, first go to the command prompt and perform the following:

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout d:\test.png -T

Enter the file storage type of field annotation [varbinary(max)]:{press enter}
Enter prefix-length of field annotation [8]: 0
Enter length of field annotation [0]:{press enter}
Enter field terminator [none]:{press enter}

Do you want to save this format information in a file? [Y/n] y {press enter}
Host filename [bcp.fmt]: annotation.fmt {press enter}

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 1      Average : (1000.00 rows per sec.)


This will help you to generate your format file which then can be used to export out the images easily.

D:\>bcp "select top 1 annotation from [DEMO_0]..[MasterCompany_Demographics_files]"  queryout "d:\test.png" -T -f annotation.fmt

Starting copy...

1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 15     Average : (66.67 rows per sec.)

I hope this helps someone...

          Getting SQL Server MetaData the EASY Way Using SSMS   

Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2017/05/11/getting-sql-server-metadata-the-easy-way-using-ssms.aspx

So, you are asked to find out which table uses up the most index space in a database.  Or how many rows are in a given table.  Or any other question about the metadata of the server.  Many people will immediately jump into writing a T-SQL or Powershell script to get this information.  However, SSMS can give you all the information you need about any object on the server.  Enter, OBJECT EXPLORER DETAILS.  Simply go to View –> Object Explorer Details OR hit F7 to open the window.  Click on any level of the Object Explorer tree, and see all of the objects below it.  So, let's answer the first question above…..which table uses the most index space.  We'll check the AdventureWorks2012 database and see:

 


[Screenshot: Object Explorer Details listing the AdventureWorks2012 tables]


 

Dammit Jim, that doesn’t tell us anything!  Well, hold on Kemosabe, there’s more.  Just like every other Microsoft tool, you can right-click in the column bar of the Object Explorer Details pane, and add the column[s] you want to see.  So, I’ll right-click on the column bar and select the “Index Space Used” column.  The columns can be sorted by clicking on the column name.  So, that makes our job a lot easier:

 


 

[Screenshot: Object Explorer Details with the Index Space Used column added and sorted]

 


 

And, as we might have guessed, the dreaded SalesOrderDetail table uses the most Index Space.  And, we’ve found that out without writing a single line of code, and we can be sure the results are accurate.  I guess it’s possible that you are never asked about your metadata.  Maybe you’re thinking “That don’t confront me, as long as I get my paycheck next Friday” (George Thorogood reference….I couldn’t help it).  But wait, don’t answer yet……did I mention that the OED window can help us with bulk scripting?  Let’s say we want to make changes to several stored procedures, and we want to script them off before doing so.  We have a choice to make:

  • Right-click on each procedure and choose “Script Stored Procedure As”
  • Right-click on the database, select Tasks –> Generate Scripts, and then use the wizard to whittle down the list of objects to just the stored procedures you want
  • Highlight the stored procedures you want to script in the OED window and right-click, Script Stored Procedure As.

 


 

[Screenshot: selecting multiple stored procedures in Object Explorer Details and choosing Script Stored Procedure As]

 


 

Yeah, ok that’s all well and good, but there really isn’t much time savings between the 3 options in that scenario.  I’ll concede that point, but consider this; Without using the OED window, there is no way to script non-database objects like Logins, Jobs, Alerts, Extended Events, etc, etc, etc without right-clicking on each individual object and choosing “Script As”.  In the OED, just highlight all the objects you want to script and right-click.  You’re done.  If you have 50 users you need to migrate to a new instance, or you want to move all of your maintenance jobs to a new server, that’s going to save a lot of time.

 

I’ve seen lots of people much smarter than I am go hog-wild writing scripts to answer simply questions like “which table has the most rows”.  Or, where is the log file from that table stored in the file system (you can find that out too).  Or what are the database collations on your server?  I’ve seen them opening multiple windows and cutting and pasting objects onto a new server.  While they were busy coding and filling in wizards, I hit F7 and got the job done in a few seconds.  Work smarter, not harder.  I hope this helps you in some way….thanks for reading.


          Adopt a SQL orphan   

Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2017/05/04/adopt-a-sql-orphan.aspx

All DBAs have been asked to copy a database from one SQL instance to another.  Even if the instances are identical, the relationship (SID) between the logins and the database users is broken.  Microsoft includes sp_change_users_login to help mitigate this problem, but using the stored procedure must be done database by database, or even user by user, which can take a significant amount of time.  Consider the time it takes to migrate an entire server, or create a secondary instance.
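
For reference, the manual approach that the script below automates looks roughly like this, one database at a time (a sketch using the documented 'Report' and 'Auto_Fix' actions of sp_change_users_login; the database and user names are placeholders):

USE MyDatabase
GO
-- List the orphaned users in the current database
EXEC sp_change_users_login 'Report'
GO
-- Re-link one orphaned user to the login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'SomeUser'
GO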

 

Why not automate the process?  Instead of taking hours, the following Powershell script will fix the orphans first, and then drop any table users that do not have a corresponding login.  You'll be done in seconds instead of hours…..but that will be our little secret. ;)

 


IMPORT-MODULE -name SQLPS -DisableNameChecking

$Server = 'Myserver'  
$ProdServer = new-object ('microsoft.sqlserver.management.smo.server') $server
$ProdLogins = $ProdServer.Logins.name


foreach ($database in $prodserver.Databases)
{
$db = $database.Name.ToString()
$IsSystem = $database.IsSystemObject
$ProdSchemas = $database.Schemas.name

    #######################################
    # Auto-Fix Orphaned user with a login                #
    #######################################
    foreach ($user in $database.users | select name, login | where{$_.name -in $ProdLogins -and $_.login -eq ""}) 
    {
    $sqlcmd = "USE " + $db + " EXEC sp_change_users_login 'Auto_Fix','" + $user.name + "'"
    invoke-sqlcmd -query $sqlcmd -serverinstance $Server
    }

    #######################################
    # Drop table users that have no login                 #
    #######################################
    foreach ($user in $database.users | select name, login | where{$_.name -notin $ProdLogins -and $_.name -notin $ProdSchemas -and $_.login -eq "" -and $db -notin ('SSISDB','ReportServer','ReportServerTempDb') -and $IsSystem -eq $false})
    {
    $sqlcmd = "USE [" + $db + "]
        IF  EXISTS (SELECT * FROM sys.database_principals WHERE name = N'" + $user.name + "')
        DROP USER [" + $user.name + "]
        GO"
    invoke-sqlcmd -query $sqlcmd -serverinstance $Server
    }
}


 

In this example, I’ve excluded system databases and the SSRS and SSIS databases.  You can include/exclude any databases you want.  A version of this script has helped me with numerous server migrations, and continues to be a way to keep production instances in sync.  I’ve also used it to do nightly production refreshes to a development environment without causing problems for the users.  Give it a try, but don’t forget to add some logging (I removed mine in the example to simplify the code.).  So get going….adopt an orphan!


          SQL server AlwaysOn Availability Group data latency issue on secondary replicas and synchronous state monitoring   

Originally posted on: http://geekswithblogs.net/FrostRed/archive/2017/04/30/244536.aspx

This article explains how data synchronization works in a SQL Server AlwaysOn Availability Group, along with some details about how to use the sys.dm_hadr_database_replica_states DMV to check replica states.


I borrowed this diagram from this article, which explains the data synchronization process of a SQL Server AlwaysOn Availability Group.



The article has full details about the process. The things worth noting here are steps 5 and 6.
When transaction log is received by a secondary replica, it is cached and then:

  • 5. The log is hardened to disk. Once the log is hardened, an acknowledgement is sent back to the primary replica.

  • 6. The redo process writes the change to the actual database.


So for a Synchronous-Commit secondary replica, the primary replica is acknowledged and completes the transaction after step 5, because "no data loss" has been confirmed on the secondary replica. In other words, after a transaction completes, SQL Server only guarantees that the update has been written to the secondary replica's transaction log files, not to the actual data files. So there will be some data latency on the secondary replicas even though they are configured as Synchronous-Commit. This means that after you make changes on the primary replica, if you try to read them immediately from a secondary replica, you might find your changes are not there yet.




This is why in our project, we need to monitor the data synchronization states on the secondary replicas. To get the replica states, I use this query:


select
   r.replica_server_name,
   rs.is_primary_replica IsPrimary,
   rs.last_received_lsn,
   rs.last_hardened_lsn,
   rs.last_redone_lsn,
   rs.end_of_log_lsn,
   rs.last_commit_lsn
from sys.availability_replicas r
inner join sys.dm_hadr_database_replica_states rs on r.replica_id = rs.replica_id


The fields ending with "_lsn" are the last Log Sequence Numbers (LSNs) recorded at different stages.


  • last_received_lsn: the last LSN the secondary replica has received

  • end_of_log_lsn: the last LSN that has been cached

  • last_hardened_lsn: the last LSN that has been hardened to disk

  • last_redone_lsn: the last LSN that has been redone

  • last_commit_lsn: I am not sure exactly what this one is. From my tests it usually equals last_redone_lsn, and in very rare cases it is slightly smaller than last_redone_lsn, so I guess it is updated a little bit after redo.




If you run the query on the primary replica, it returns the information for all replicas.

If you run the query on a secondary replica, it returns the data for itself only.



Now we need to understand the format of those LSNs. As this article explains, an LSN is made up of three parts:


  • the VLF (Virtual Log Files) sequence number: 0x14 (20 decimal) in the article's example

  • the offset of the log block within the VLF: 0x61 in the example (measured in units of 512 bytes)

  • the slot number: 1 in the example



As the LSN values returned by the query are in decimal, we have to break them into those three parts before comparing them.


Now we can compare the LSN values to figure out some information about the state of the replica. For example, we can tell how far the redo process is behind the logs that have been hardened on Replica-2:


last_hardened_lsn - last_redone_lsn = 5137616 - 5137608 = 8.


Note, we don’t need to include the slot number(the last 5 digits) into the calculation. In fact, most of the LSNs in dm_hadr_database_replica_states table do not have slot number. As this document says: they are not actual log sequence numbers (LSNs), rather each of these values reflects a log-block ID padded with zeroes. We can tell this by looking at the last 5 digits of a LSN number. If it always ends with 00001, it means it is not actual LSN.


As we know from this article that the block offset is measured in units of 512 bytes, the difference between last_hardened_lsn and last_redone_lsn in the example above means there are 8 * 512 = 4096 bytes of data waiting to be written to the data file.
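
To put that arithmetic into a query, here is a rough sketch that drops the 5-digit slot portion and converts the hardened/redone gap into bytes. This is only an approximation based on the decomposition described above (it ignores VLF boundaries, and last_redone_lsn can be NULL, as noted in the update below):

select
   r.replica_server_name,
   (floor(rs.last_hardened_lsn / 100000.0) - floor(rs.last_redone_lsn / 100000.0)) * 512 as approx_redo_lag_bytes
from sys.availability_replicas r
inner join sys.dm_hadr_database_replica_states rs on r.replica_id = rs.replica_id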


UPDATE: I have observed that last_redone_lsn can be NULL when no replica is acting as primary in the group. It does not happen every time, and I don't know under what conditions it will be NULL. However, last_commit_lsn always seems to have a value, and as I mentioned it is very close to last_redone_lsn (maybe just slightly behind it), so last_commit_lsn might be used when last_redone_lsn is not available.


This article has a much better explanation of how transaction logs work in an Availability Group.


          Redirect Standard Error to Output using PowerShell   

Originally posted on: http://geekswithblogs.net/BobHardister/archive/2017/04/25/redirect-standard-error-to-output-using-powershell.aspx

We use LiquiBase for database change control and Octopus for deployment to downstream environments in our CICD pipeline. Unfortunately, the version of LiquiBase we use writes information messages to standard error. Octopus then interprets this as an error and marks the deployment with warnings when in fact there were no warnings or errors. Newer versions of LiquiBase may have corrected this.

This statement in the update-database function of the liquibase.psm1 file will publish information level messages as errors in the Octopus task log:

..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/liquibase.properties --changeLogFile=$changeLogFile $url --logLevel=$logLevel update

As a work-around, you can call the statement as a separate process and redirect standard error to standard out as follows:

&  cmd /c "..\.liquibase\liquibase.bat --username=$username --password=$password --defaultsFile=../.liquibase/liquibase.properties --changeLogFile=$changeLogFile $url --logLevel=$logLevel update 2>&1" | out-default

Now the messages are published to Octopus as standard output and display appropriately.


          SQL Database is in use and cannot restore   

Originally posted on: http://geekswithblogs.net/renso/archive/2017/03/31/sql-database-is-in-use-and-cannot-restore.aspx

USE master
GO

ALTER DATABASE <mydatabase>
SET SINGLE_USER
--This rolls back all uncommitted transactions in the db.
WITH ROLLBACK IMMEDIATE
GO

RESTORE DATABASE <mydatabase> FILE = N'mydatabase' FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydatabase.bak' WITH  FILE = 1,  NOUNLOAD,  REPLACE,  STATS = 10
GO
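
Once the restore completes, you will typically want to put the database back into multi-user mode; a minimal follow-up (not part of the original snippet) would be:

ALTER DATABASE <mydatabase>
SET MULTI_USER
GO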


          Gold Project, E-Waste Database Support SCP   
EBRD will loan US$140 million to Bakyrchik Mining Venture LLP for development of the Kyzyl gold deposit, enhancing the use of modern technology in Kazakhstan’s mining sector and bringing the sector in line with the highest international standards. The IGF has developed a guidance document to help governments develop, implement and monitor effective management strategies for artisanal and small-scale mining. UNU's Sustainable Cycles programme, the International Solid Waste Association (ISWA) and the ITU formed a Global e-Waste Statistics Partnership to build country capacity to produce comparable, reliable e-waste statistics and track developments over time.
          Provisioning just got BIGGER   
The recent launch of Redgate SQL Clone v2 has removed the previous 2TB size limit, as the tool now supports cloning databases up to a whopping 64TB. In this post, Karis Brummit explains how the increa ... - Source: www.sqlservercentral.com
          Oracle Database Administrator / Performance Tuning Specialist - RPM Technologies - Toronto, ON   
*Job Summary* The Oracle Database Administrator / Performance Tuning Specialist is responsible for the maintenance and implementation of database changes for
From Indeed - Thu, 29 Jun 2017 16:24:19 GMT - View all Toronto, ON jobs
          DBA2 DBA - Sky System Inc - Toronto, ON   
*JOB RESPONSIBILITIES: * As a DB2 Database Administrator II, you will provide required support for business applications using DB2 databases. As part of a
From Indeed - Fri, 02 Jun 2017 20:38:07 GMT - View all Toronto, ON jobs
          Oracle DBA / Performance Specialist - RPM Technologies - Toronto, ON   
The Oracle Database Administrator / Performance Tuning Specialist is responsible for the maintenance and implementation of database changes for our
From RPM Technologies - Tue, 09 May 2017 22:27:01 GMT - View all Toronto, ON jobs
          Oracle Database Administrator - RPM Technologies - Toronto, ON   
*About RPM* RPM Technologies provides software solutions and services to the largest financial services companies in Canada. We offer product record keeping
From Indeed - Mon, 08 May 2017 20:40:57 GMT - View all Toronto, ON jobs
          System Message 6124 Bug   

I believe I have found a bug in Dynamics SL 2015 (version 9.01.30423.01). Twice we have encountered an issue where my CFO goes to process batches and gets this error:

System Message 6124: Process started: Tue Dec 08 01:22 P.M. 

Module: 'PA' 

Batch: '0000042651' 

Processing: Module: 'PA' , Batch: '0000042651' 
System Message 3011: is not in balance and not posted. 

System Message 6125: Process ended: Tue Dec 08 01:22 P.M.

What has happened is that even though the Control value in the 01.010.00 screen shows the correct number, the database value is 0. If she sets the Control value on the 01.010.00 screen to a different number (we used 1) and then saves, the next time she goes to the screen it has the correct Control value and the database has been updated.

Some additional detail:

The fields CrTot, CtrlTot, CuryCrTot, and CuryCtrlTot would all be 0 in the BATCH table. Once she entered a different number and saved, these fields then had the correct values.
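
For reference, a quick way to compare what the screen shows against what is actually stored is a query along these lines (a sketch only; the module and batch number are the ones from this example, and the column and table names are as mentioned above):

SELECT BatNbr, Module, CrTot, CtrlTot, CuryCrTot, CuryCtrlTot
FROM BATCH
WHERE Module = 'PA'
  AND BatNbr = '0000042651'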


          unable to grant error: ExecuteNonQuery failed to database 'MASTER'   

Hello,

I'm moving the current SL installation to a new server. When we try to connect with DB Maintenance, the system sends the message

unable to grant error: ExecuteNonQuery failed to database 'MASTER'

The next step for this installation is to migrate to version SL2011 FP1.

thanks

Carlos Torre


          RE: unable to grant error: ExecuteNonQuery failed to database 'MASTER'   

I'm having this same issue.  Was there ever a resolution?


          RE: Inventory (issue screen): the detail line items are missing from the screen and database   

Thanks for the script.

Here is the response from the MSFT support. Just wanted to share with you.

" This looks like one of those anomaly cases where the batch exists in the BATCH table and never wrote anything to INTRAN. The batch is showing as Completed and Released in batch, all with 0 amounts. This batch record is finished as far as SL believes it to be. You can re-enter the record that was originally input and it will process as normal.

Everything that I see in this batch looks like it will be fine and shouldn’t bother anything. I believe the forum post was on the right track when it said that leaving the screen open for too many transactions is likely the cause. We have seen that sporadically in various screens but have never been able to consistently reproduce it in house.

If you like, you can delete this batch from the BATCH table.

But there is also not a problem with that batch existing, either. It’s just an empty one. If you want to clear it out of the PV List for the Issues screen you can run a delete statement to take it out of the BATCH Table. Or you can leave it where it is. Either way it’s not causing any harm. "
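
For reference, such a cleanup delete would look something like the sketch below (back up the table first; the module and batch number are placeholders for the empty batch identified above):

DELETE FROM BATCH
WHERE Module = 'IN'
  AND BatNbr = '<empty batch number>'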


          RE: Inventory (issue screen): the detail line items are missing from the screen and database   

Here's the script. Please remember this assumes you found out the missing data from a source outside the system; therefore, any meaningful data is expected as a parameter to the script.

As a disclaimer, this script was meant to solve a specific issue in my company. In fact, there's a lot of hard-coded data that won't fit your environment.

To run it, the command is something like:

sqlcmd -S <server> -E -i add-intran.sql -v invtid=IEPCA004 batnbr=661570 line=19 q=1 ref=20170404 srclineref=00001 srcnbr=067092 srctype=POR task=Z9180 uom=PAR

/**************************************************************************************************
    recupera-intran.sql

    Recovers the INTran detail rows

    Parameters:
       $invtid
       $batnbr
       $line
       $q
       $ref
       $srclineref
       $srcnbr
       $srctype
       $task
       $uom
***************************************************************************************************/

declare @acct char(10)
declare @desc char(30)

select @desc= descr, @acct=COGSAcct from inventory where invtid='$(invtid)'

INSERT INTO [dbo].[INTran]
           ([Acct]
           ,[AcctDist]
           ,[ARDocType]
           ,[ARLineID]
           ,[ARLineRef]
           ,[BatNbr]
           ,[BMICuryID]
           ,[BMIEffDate]
           ,[BMIEstimatedCost]
           ,[BMIExtCost]
           ,[BMIMultDiv]
           ,[BMIRate]
           ,[BMIRtTp]
           ,[BMITranAmt]
           ,[BMIUnitPrice]
           ,[CmmnPct]
           ,[CnvFact]
           ,[COGSAcct]
           ,[COGSSub]
           ,[CostType]
           ,[CpnyID]
           ,[Crtd_DateTime]
           ,[Crtd_Prog]
           ,[Crtd_User]
           ,[DrCr]
           ,[EstimatedCost]
           ,[Excpt]
           ,[ExtCost]
           ,[ExtRefNbr]
           ,[FiscYr]
           ,[FlatRateLineNbr]
           ,[ID]
           ,[InsuffQty]
           ,[InvtAcct]
           ,[InvtID]
           ,[InvtMult]
           ,[InvtSub]
           ,[IRProcessed]
           ,[JrnlType]
           ,[KitID]
           ,[KitStdQty]
           ,[LayerType]
           ,[LineID]
           ,[LineNbr]
           ,[LineRef]
           ,[LotSerCntr]
           ,[LUpd_DateTime]
           ,[LUpd_Prog]
           ,[LUpd_User]
           ,[NoteID]
           ,[OrigBatNbr]
           ,[OrigJrnlType]
           ,[OrigLineRef]
           ,[OrigRefNbr]
           ,[OvrhdAmt]
           ,[OvrhdFlag]
           ,[PC_Flag]
           ,[PC_ID]
           ,[PC_Status]
           ,[PerEnt]
           ,[PerPost]
           ,[PoNbr]
           ,[PostingOption]
           ,[ProjectID]
           ,[Qty]
           ,[QtyUnCosted]
           ,[RcptDate]
           ,[RcptNbr]
           ,[ReasonCd]
           ,[RefNbr]
           ,[Retired]
           ,[Rlsed]
           ,[S4Future01]
           ,[S4Future02]
           ,[S4Future03]
           ,[S4Future04]
           ,[S4Future05]
           ,[S4Future06]
           ,[S4Future07]
           ,[S4Future08]
           ,[S4Future09]
           ,[S4Future10]
           ,[S4Future11]
           ,[S4Future12]
           ,[ServiceCallID]
           ,[ShipperCpnyID]
           ,[ShipperID]
           ,[ShipperLineRef]
           ,[ShortQty]
           ,[SiteID]
           ,[SlsperID]
           ,[SpecificCostID]
           ,[SrcDate]
           ,[SrcLineRef]
           ,[SrcNbr]
           ,[SrcType]
           ,[StdTotalQty]
           ,[Sub]
           ,[SvcContractID]
           ,[SvcLineNbr]
           ,[TaskID]
           ,[ToSiteID]
           ,[ToWhseLoc]
           ,[TranAmt]
           ,[TranDate]
           ,[TranDesc]
           ,[TranType]
           ,[UnitCost]
           ,[UnitDesc]
           ,[UnitMultDiv]
           ,[UnitPrice]
           ,[User1]
           ,[User2]
           ,[User3]
           ,[User4]
           ,[User5]
           ,[User6]
           ,[User7]
           ,[User8]
           ,[UseTranCost]
           ,[WhseLoc])
     VALUES
           (@acct -- Acct
           ,0 --<AcctDist, smallint,>
           ,'' --<ARDocType, char(2),>
           ,0 --<ARLineID, int,>
           ,''--<ARLineRef, char(5),>
           ,'$(batnbr)' --<BatNbr, char(10),>
           ,'' --<BMICuryID, char(4),>
           ,0 --<BMIEffDate, smalldatetime,>
           ,0 --<BMIEstimatedCost, float,>
           ,0 --<BMIExtCost, float,>
           ,'' --<BMIMultDiv, char(1),>
           ,0 --<BMIRate, float,>
           ,'' --<BMIRtTp, char(6),>
           ,0 --<BMITranAmt, float,>
           ,0 --<BMIUnitPrice, float,>
           ,0 --<CmmnPct, float,>
           ,1 --<CnvFact, float,>
           ,@acct --<COGSAcct, char(10),>
           ,'0000000ZZZZZZZ' --<COGSSub, char(24),>
           ,'' --<CostType, char(8),>
           ,'MPCSA' --<CpnyID, char(10),>
           ,getdate() --<Crtd_DateTime, smalldatetime,>
           ,'10020' --<Crtd_Prog, char(8),>
           ,'ESTRELOW' --<Crtd_User, char(10),>
           ,'C' --<DrCr, char(1),>
           ,0 --<EstimatedCost, float,>
           ,0 --<Excpt, smallint,>
           ,0 --<ExtCost, float,>
           ,'' --<ExtRefNbr, char(15),>
           ,'2017' --<FiscYr, char(4),>
           ,0 --<FlatRateLineNbr, smallint,>
           ,'' --<ID, char(15),>
           ,0 --<InsuffQty, smallint,>
           ,'110401' --<InvtAcct, char(10),>
           ,'$(invtid)' --<InvtID, char(30),>
           ,-1 --<InvtMult, smallint,>
           ,'1CT0009ZZZZZZZ' --<InvtSub, char(24),>
           ,0 --<IRProcessed, smallint,>
           ,'IN' --<JrnlType, char(3),>
           ,'' --<KitID, char(30),>
           ,0 --<KitStdQty, float,>
           ,'S' --<LayerType, char(1),>
           ,$(line) --<LineID, int,>
           ,-32768+($(line)-1)*256 --<LineNbr, smallint,>
           ,REPLACE(STR($(line),5),' ','0') --<LineRef, char(5),>
           ,0 --<LotSerCntr, smallint,>
           ,getdate() --<LUpd_DateTime, smalldatetime,>
           ,'10020' --<LUpd_Prog, char(8),>
           ,'ESTRELOW' --<LUpd_User, char(10),>
           ,0 --<NoteID, int,>
           ,'' --<OrigBatNbr, char(10),>
           ,'' --<OrigJrnlType, char(3),>
           ,'' --<OrigLineRef, char(5),>
           ,'' --<OrigRefNbr, char(10),>
           ,0 --<OvrhdAmt, float,>
           ,0 --<OvrhdFlag, smallint,>
           ,'Y' --<PC_Flag, char(1),>
           ,'' --<PC_ID, char(20),>
           ,1 --<PC_Status, char(1),>
           ,'201704' --<PerEnt, char(6),>
           ,'201704' --<PerPost, char(6),>
           ,'' --<PoNbr, char(10),>
           ,'' --<PostingOption, smallint,>
           ,'CT0009' --<ProjectID, char(16),>
           ,$(q) --<Qty, float,>
           ,0 --<QtyUnCosted, float,>
           ,0 --<RcptDate, smalldatetime,>
           ,'' --<RcptNbr, char(15),>
           ,left('$(invtid)',3) --<ReasonCd, char(6),>
           ,'$(ref)' --<RefNbr, char(15),>
           ,0 --<Retired, smallint,>
           ,0 --<Rlsed, smallint,>
           ,'' --<S4Future01, char(30),>
           ,'' --<S4Future02, char(30),>
           ,0 --<S4Future03, float,>
           ,0 --<S4Future04, float,>
           ,0 --<S4Future05, float,>
           ,0 --<S4Future06, float,>
           ,0 --<S4Future07, smalldatetime,>
           ,0 --<S4Future08, smalldatetime,>
           ,0 --<S4Future09, int,>
           ,0 --<S4Future10, int,>
           ,'' --<S4Future11, char(10),>
           ,'' --<S4Future12, char(10),>
           ,'' --<ServiceCallID, char(10),>
           ,'' --<ShipperCpnyID, char(10),>
           ,'' --<ShipperID, char(15),>
           ,'' --<ShipperLineRef, char(5),>
           ,0 --<ShortQty, float,>
           ,'CT0009' --<SiteID, char(10),>
           ,'' --<SlsperID, char(10),>
           ,'' --<SpecificCostID, char(25),>
           ,0 --<SrcDate, smalldatetime,>
           ,'$(srclineref)' --<SrcLineRef, char(5),>
           ,'$(srcnbr)' --<SrcNbr, char(15),>
           ,'$(srctype)' --<SrcType, char(3),>
           ,0 -- <StdTotalQty, float,>
           ,'1CT0009ZZZZZZZ' --<Sub, char(24),>
           ,'' --<SvcContractID, char(10),>
           ,0 --<SvcLineNbr, smallint,>
           ,'$(task)' --<TaskID, char(32),>
           ,'' --<ToSiteID, char(10),>
           ,'' --<ToWhseLoc, char(10),>
           ,'' --<TranAmt, float,>
           ,'20170404' --<TranDate, smalldatetime,>
           ,@desc --<TranDesc, char(30),>
           ,'II' --<TranType, char(2),>
           ,0 --<UnitCost, float,>
           ,'$(uom)' --<UnitDesc, char(6),>
           ,'M' --<UnitMultDiv, char(1),>
           ,0 --<UnitPrice, float,>
           ,'' --<User1, char(30),>
           ,'' --<User2, char(30),>
           ,0 --<User3, float,>
           ,0 --<User4, float,>
           ,'' --<User5, char(10),>
           ,'' --<User6, char(10),>
           ,0 --<User7, smalldatetime,>
           ,0 --<User8, smalldatetime,>
           ,0 --<UseTranCost, smallint,>
           ,'01' --<WhseLoc, char(10),>
	   )
GO



          RE: Inventory (issue screen): the detail line items are missing from the screen and database   

Would you please share the script with me? Thanks


          RE: Inventory (issue screen): the detail line items are missing from the screen and database   

Sorry to say that this happens to us from time to time.

The symptom is an existing IN batch record with zero debit and zero credit with the right date and userid stamps, but no intran nor any real transaction recorded whatsoever. It's like the user was shooting blanks.

The only additional info I have is that this happens when the same instance of the 1002000 Issue screen has been in use for some time ( > 3 hours). After that, it fails to record the details sheet. In my case, it's likely that more than one batch is affected in a single event.

This happens about once every quarter. I haven't opened a support case.

I even have a script to recover the lines, but it depends on the user having a hard copy of the transaction. So it's equivalent to starting a new transaction from scratch.


          Exportizer 6.1.2   
Free database export tool
          Thinkphp5: SQLite database connection configuration   
In the database.php file:

// Database type
'type'           => 'sqlite',
// Connection DSN
'dsn'            => 'sqlite:'.APP_PATH.'tph2/database.db',

 
          Sr. Database Administrator for Georgia Department of Public Health   
nLeague Services Atlanta, GA
          dbForge Fusion for Oracle 3.8   
Powerful Visual Studio plugin for efficient Oracle database development process
          Quick Heal Virus Database 17.00(30 June 2   
Offers you the latest virus definitions you can use to manually update
          Dbvisit Replicate 2.9.00   
Create a duplicate for your Oracle database with this program
          Data Visualization   
There has been a shift in how data is consumed in the last 5 to 8 years. Appropriately so, I believe that consuming data in context, and particularly the proper use of process-based portals, has changed how the typical end-user believes they should be able to find information. Over the next 5 to 8 years I believe contextual, process-based mini-portals will become the norm for consumption (some call these mashups). During that time I believe another shift will occur, and that is in how we view the data that is now gathered in one place. Data Visualization is a term I take a little bit of leeway with. People usually stovepipe data visualization, but to me it can mean any image or model that is driven by or associated with data. By the way, I like to remind people that this is not a new thing either. The GIS community has been doing this for a decade. The spreadsheet folks have been doing this for longer than that with their charting and mapping capabilities. We were even doing this in Lotus Notes back in the early to mid 90s with hotspot images front-ending a database application.

I think data visualization is the next wave for two reasons. The first reason is that for complex processes it is the simplest means of consuming mashed-up data. Take a product like OpenView or Tivoli. No one really wants to wade through all of the different log files, patch information, system components, etc… they just want to look at a network topology map and see if the box/line/etc… is green or red. In these types of instances there is data overload, and simply monitoring the exception and quickly understanding what that exception means in context is the most important task. The second reason is that we are getting a huge wave of users into the US workforce who are gamers. Gamers are used to data visualization. Go play WoW, EQ2, Half Life, Vanguard, CoH, etc… and you soon get an idea of the vast amount of data that is contained in a world of nothing but visualization. Spend some time in Sims Online or Second Life and you see the same model alive and well in a virtual but not game-specific community. These users will demand data visualization because they understand the power of it. Having been a gamer for so long, I might be biased on this, but personally I am deeply excited.

For me, data visualization is the end of the document/record model we are so trapped in today; instead, data could take any form. Think of it like nature, in that data is the DNA that makes up the final organism (the visualization). The visualization layers will become just as dynamic and will be able to take form based on the smallest component’s sequencing, function and placement. If you want to see someone that gets it, go check out www.bridgeborn.com. Their Bridgeworks Engine/Platform shows just a piece of what they are capable of and just a pinhole for where this area is going, but man, what a sight and what hope through that pinhole!!
          Fire, Ready, Aim!   
I always loved that saying. It was one that was sunk into me by a colleague Barbara Baird of Lotus Organizer fame. No idea what she is doing now but man she had her finger on the pulse of how people really were and how they really acted.

I was recently talking to a Lotus employee about a bunch of stuff and was asked "why don't customers get the power of custom applications". My answer: "don't forget people Act then Think versus Think then Act". It is not that customers don't get the power of custom applications; it is that they don't think about it first anymore. The more I get out there really working with customers the more I see how true this is.

Customers say, "I want to be in better touch with my customers and partners so I need a CRM package". You have to read into that statement alone. What automatically brings them to conclude that they need a CRM package? Well, someone told them. Next they say, well, I need to look at SAP or Oracle or Siebel or Salesforce.com (sorry, MS Dynamics is not on the tip of the tongue yet). Why? Again, because they have been influenced to think that these COTS (commercial off-the-shelf) based products are the best. So then what really happens? They buy one of these and end up spending thousands of dollars customizing it to have this report or that report or capture this data or that data blah blah blah....you get the picture.

In addition, custom applications have gotten a bad stigma over the past 3 or 4 years (especially in the public sector). CIOs and IT managers are afraid to build a best-of-breed solution, for the most part because they are afraid that if it doesn't work they will get canned. Custom applications have always had the hard issues of "now who will support it" or "what happens when one component gens itself and breaks the solution". Very few people anymore sit there and think, "well I need the best workflow engine, the best database and indexing capabilities, the best search performance, the best data capture capabilities...etc...".

The sad truth though is that everyone needs custom applications. Software is about business and, in the true sense of form follows function, not all businesses are run the same. Do you really think Jeff Bezos used a COTS product to manage his shipping without at least customizing it to fit a unique on-line shopping model? To compete we have to be unique. To be unique you need systems that can adapt and react at business speed now.

This war around custom vs COTS is enabled by the advancement in the openness and robustness of the engines today but as always it will be won by the applications. How good is a motor without a car or boat to power? A good friend of mine, Peter O'Kelly, used to say "it's the apps stupid" all the way back in 1993 when we first started working on Lotus Notes together. How right he was. Luckily he didn't bet me $.07 then.

You want to win the war? You have to fight the battle on multiple fronts. Your platform/engine message needs to be targeted at the ISVs, not the customers. You have to get them to realize you have some very unique pieces that could allow them to build better COTS products. You then have to help them take the message to the consumer that there are COTS products available on the platform. Either that, or get into the COTS business yourself (which I would recommend, at least for a certain number of applications). Then, in a high-touch sales model, convince them that your COTS is good but it is the ease of customization that makes the solution powerful!
          Tintri rises nearly 4% as another technology company goes public   
Tintri provides an enterprise cloud platform that helps companies manage their storage and databases.
          Spark Summit: SequoiaDB x Spark Architecture Scales Operational Data Lake in China   
SAN FRANCISCO, June 30, 2017 /PRNewswire/ -- From June 5 to 7, Spark Summit SF 2017 came to meet the Silicon Valley again. SequoiaDB, the top distributed database vendor in China and also one of the 14 certified Spark distributors over the world, was invited to the summit and share a talk...
          8tracks Prompts Password Reset After Hack   

Internet radio service 8tracks this week informed users of a database hack, prompting them to reset their passwords to prevent account compromise.


          Using Firebase/Webservice with .Net on Android and Swift   

I'm trying to figure out the best/easiest way to approach the project I want to take on. It will have a website built in .Net Core, and I will be building an Android app and a Swift app as well, all sharing the same database. I researched this and found that I need to create a web service to share the data between the website, the Android app, and the iOS app.

I keep seeing Firebase come up. Would Firebase make this easier for me, or do I need to stick with a .NET web service? I need to authenticate against the aspnetusers table, so I don't know how Firebase would come into play here, but I read it's easier to work with. I just don't know if it can work with .NET better than a .NET web service.

Can anyone provide me with advice on the easiest approach to sharing my .NET website's data with mobile platforms?


          MRI Image Processor   
MA-Marlborough, Job Summary: The MRI Image Processor will complete pre-specified image processing tasks on MRI images and record findings in a database. Key Responsibilities: 1) Loading images to the workstation from a USB drive 2) Checking the images for abnormalities and recording findings 3) Initiating scripted processing actions in sequence 4) Logging all actions performed in a record. 5) Up to 6 co-located w
          magento 2 review by maldiveboy   
i want to locate magento 2 review table from database (Budget: $10 - $30 USD, Jobs: Magento)
          Build a Website by amiragomaa   
Hi there, I got a website build in dreamweaver which needs to be updated. I would need to add on a flight & hotel booking system, a live chat & some parallax scrawling on the landing page. The database I have is SQL (XML links) Anyone able to help? Please reply with pricing... (Budget: £20 - £250 GBP, Jobs: Adobe Dreamweaver, Website Design)
          How to Recover Your Drupal Website From Registry Errors   
David Csonka Fri, 02/17/2017 - 06:54

In a previous blog post we discussed ways in which you can clear your Drupal website's cache when it is having hard crashes or the infamous "white screen of death" (WSOD). Being able to clear the Drupal cache is important because it can resolve a lot of various issues that may crop up, and is usually the first thing to do when something isn't working right. 

A specific case is when clearing the cache would be helpful, but you might not be able to do it: the Drupal registry (which records which PHP classes are sourced from which files) may contain outdated information that prevents Drupal from bootstrapping even far enough to allow the cache to be cleared.

You might see an error like the following:

PHP Fatal error:  Class 'EntityAPIControllerExportable' not found in ...sites/all/modules/rules/includes/rules.core.inc on line 11

In this case, rebuilding the registry is what needs to happen, since manually clearing out the database cache tables may not resolve the issue. One reason this type of thing might occur is during a manual migration, when modules get moved around or removed. Fortunately, there is a very helpful Drush plugin called "Registry Rebuild" which can help resolve this problem. Keep in mind that while you download this plugin, usually through Drush, it is not a Drupal module; it is a Drush plugin. This means you don't need to access your Drupal website to utilize this tool, you can just run it from the command line like any other Drush command.

The first step is to download the plugin into Drush; you can do this with the command "drush dl registry_rebuild". After that, you'll likely need to clear Drush's internal cache with the command "drush cc drush". Now you can run the registry rebuilding process for the broken Drupal website. The command you use is "drush rr", and it needs to be run within the Drupal file area on your host, as you would expect, or by using Drush aliases.

After running the registry rebuild, if it proceeds normally you will see a response like the following:

The registry has been rebuilt via registry_rebuild (A).              [success]
All caches have been cleared with drush_registry_rebuild_cc_all.     [success]
The registry has been rebuilt via drush_registry_rebuild_cc_all (B). [success]
All caches have been cleared with drush_registry_rebuild_cc_all.     [success]
All registry rebuilds have been completed.

With luck, your Drupal website's registry issues will be resolved, with the registry table now reflecting the existing locations of the files needed for the required Drupal PHP classes. If this fails, an alternative may be to try to add back any modules or files in your Drupal website that might have been moved or removed. The last resort may be to restore from a backup, but the Registry Rebuild plugin is great and should fix the problem for you.

Still having trouble getting your Drupal website back up and running?

We might be able to help you! Contact Us


          How to Fix Drupal Failed to Get Available Update Data Errors   
David Csonka Mon, 02/13/2017 - 04:38

There is a rare but altogether annoying issue that may present itself in your Drupal website from time to time, and while it has several different potential causes, it can usually be resolved in a straightforward manner. The "Failed to Get Available Update Data" error occurs when you go to the available updates report in your Drupal website interface to see which modules have updates available, and your Drupal website is unable to look up that status. It will look something like the image below:

[Screenshot: the available updates report showing "Failed to get available update data"]

One of the more complicated (but less likely) explanations for this issue is if your server or the website cannot resolve to itself. This might happen if the web server is behind NAT, and the provider does not support full split DNS functionality.

If you are using Drush to retrieve updates, then a simple cause and solution would be to check if Drush itself needs to be updated. You can review the most common ways to update Drush (depending on your version) here: https://www.drupal.org/node/901828

However, by far the most common reason for this failed update status issue is that the "cache_update" table needs to be cleared. This database table is a storage location for cached information about updates, so manually clearing it (if normal Drupal cache clearing does not do the trick) should fix the problem. Doing this manually involves accessing your database (perhaps with a tool like PHPMyAdmin) and using the "truncate" feature on the specified table. Note: only do this if you are comfortable working with databases, as accidentally deleting or "dropping" a necessary Drupal database table can cause your website to behave strangely or crash completely.
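
For example, assuming direct MySQL access to the site's database (add your table prefix if the site uses one), clearing that single table looks like this:

TRUNCATE TABLE cache_update;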

Not comfortable making database changes to your Drupal website manually?

We might be able to help you! Contact Us


          How to Clear Your Drupal Cache When Your Website WSOD or Errors   
David Csonka Fri, 02/10/2017 - 18:02

The Drupal "White Screen of Death" or WSOD for short, would be a hilariously named aspect of Drupal if it wasn't actually an incredibly frustrating part of developing with this CMS. Unfortunately it is something that you can encounter when deploying updates for Drupal modules or when developing a new module of your own.

Definition from Drupal.org:

Occasionally a site user or developer will navigate to a page and suddenly the page content disappears, and it becomes blank. No content. No errors. Nothing. This happens sometimes; it could happen after updating a module, theme, or Drupal core. This is what is referred to by most members of the Drupal community as the White Screen of Death or WSOD. There are several reasons why this might occur, and therefore several possible solutions to the issue.

There are various steps to help you determine the cause of the WSOD, such as enabling more robust error reporting, but sometimes a WSOD can leave your Drupal website completely locked down. For example, if the error causing the white screen of death originated from a custom module you were developing, and the problematic Drupal code is cached, you may not be able to easily clear the cache in order to propagate the fix. Trying to use the Drush command "drush cc all" (for clear cache all) or going to the Performance settings page to manually clear the cache will both likely fail.

Manually Empty Database Tables

If you are familiar with working with MySQL, or even just with tools like PHPMyAdmin, one step you can take is to go into the database and manually empty or "truncate" the tables labeled as "cache" in the Drupal website's database. This is more of an advanced technique and is not really recommended unless you know what you're doing or have no other alternative. At the very least, be sure to make a backup of your Drupal website database before proceeding.
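
As a rough sketch of what that looks like from the MySQL command line client (dbuser and drupal_db are placeholders for your own credentials and database name):

# List the cache tables for the site
mysql -u dbuser -p drupal_db -e "SHOW TABLES LIKE 'cache%'"
# Empty the main cache tables; repeat for any remaining cache_* tables listed above
mysql -u dbuser -p drupal_db -e "TRUNCATE TABLE cache; TRUNCATE TABLE cache_bootstrap;"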

Drush SQL Commands

If you can't avoid truncating those cache tables, but don't want to go into MySQL directly, are concerned about potential human error while working with the database tables, or simply don't have permissions for direct access to the Drupal database, another alternative is to use a Drush command to truncate the "cache" tables.
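
A minimal sketch of that Drush-based approach (it reuses the database credentials from the site's settings.php, so no direct MySQL access is needed):

# Empty the main cache table through Drush
drush sql-query "TRUNCATE TABLE cache;"
# The other cache_* tables (cache_bootstrap, cache_menu, and so on) can be emptied the same way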

Summary of Options

(If clearing the cache normally with Drush or the UI isn't possible)

  • Empty or truncate the cache tables in the Drupal database through MySQL or PHPMyAdmin
  • Empty or truncate the cache tables in the Drupal database with a Drush command

Using these techniques, you should be able to get your Drupal website to respond again after resolving the error that caused the "white screen of death" in the first place. And if you are lucky, sometimes just clearing the cache is enough to fix the problem anyway. Just be sure to test your Drupal updates, so that when these kinds of problems happen, they occur in a testing environment instead of on your live website!

Still having trouble getting your Drupal website back up and running?

We might be able to help you! Contact Us


          Using MongoDB With Drupal   
David Csonka Fri, 02/03/2017 - 04:04

Because of the database abstraction layer that was added in Drupal 7, it is fairly convenient to use a variety of database servers for the backend of your Drupal software. While the term "database abstraction layer" does sound rather sophisticated, and the code involved is certainly not insignificant, in layman's terms what this system does is provide a way for a Drupal developer and Drupal modules to work with its database without generally having to be concerned with what type of database it is.

Generally speaking though, this works very well with relational databases, such as MySQL. These types of databases are composed of various tables connected by key relationships. The relational model is a very successful one and has been studied and improved for decades now. Schemas and relational integrity are important features of this model that make it useful for content management systems.

There are other types of database models though, most having been around just as long. NoSQL is a popular classification that is often used to refer to non-relational database types, and MongoDB is a somewhat newer database system built around document collections that fit into this category.

Rather than storing data in tables with rows and columns, MongoDB keeps it in documents that have a JSON-like format. These documents also aren't bound by a strict universal schema, so your data can easily change over time without requiring retroactive edits to older documents. Some of the key qualities that have attracted users to MongoDB are its built-in performance-enhancing features, such as high availability with replica sets and load balancing with horizontal sharding.

That is quite obviously a very cursory review of the technical aspects of MongoDB, but you can read in more detail about it on their main website.

While document-based databases are not new, the release of MongoDB several years ago created quite a stir and made developers very interested in finding uses for it in their applications, usually to take advantage of its vaunted performance qualities.

Can you use MongoDB with Drupal?

The short answer is "yes", sort of. Drupal 7 saw the release of the MongoDB module. An important thing to realize, though, is that this integration does not allow for completely switching to MongoDB as the database for your Drupal installation. Despite the utility of the Drupal database API we previously mentioned, there are still aspects of how a content management system like Drupal works that don't lend themselves well to the document storage nature of MongoDB. For Drupal 7, a significant number of Drupal components can still be stored in MongoDB, and for Drupal 8 possibly even more once work on the module is completed.

See the table on the module project page to review which Drupal features can be converted to use MongoDB.

So, will you see performance boosts on your Drupal website just by integrating MongoDB to store various components, like entities or blocks? It is possible to gain a small performance increase, but this is not guaranteed and is almost assuredly dependent on the nature of your website and its content.

A document storage database like MongoDB is much better suited to serving lots of "reads" very quickly and allows for scaling to multiple servers very easily. So, if you have a large website that serves an enormous amount of content to be read (and not updated) by users, it might be advantageous to use a solution like MongoDB.

However, if you have a lot of interactive content with editing and updating, so writes to the database, then MongoDB may not offer any improvements and actually may cause problems with duplication if not properly managed.

The important thing to realize here is that many popular technologies are not automatically a good solution simply because they are being talked about and used by well-known tech luminaries. Most tools have a use-case that matches their features, and MongoDB is no different. Be sure to learn more about this database system before determining if it will be a useful addition to your project.


          Three Ways to Start Redis   
Method 1: Starting directly
Installation:
tar zxvf redis-2.8.9.tar.gz
cd redis-2.8.9
#Compile with a plain make
make
#As root you can run `make install` to copy the executables to /usr/local/bin, so the programs can then be run by name alone.
make install
Startup:
#Append `&` so redis runs as a background process
./redis-server &
Checks:
#Check that the background process exists
ps -ef |grep redis
#Check that port 6379 is listening
netstat -lntp | grep 6379
#Use the `redis-cli` client to check that connections work
./redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set key "hello world"
OK
127.0.0.1:6379> get key
"hello world"

Stopping:
#Using the client
redis-cli shutdown
#Redis handles termination signals gracefully, so killing the process directly also works
kill -9 PID


Method 2: Starting with a specified configuration file

Configuration file
You can start the redis server with a specific configuration file; redis.conf lives in the Redis root directory.
#Change daemonize to yes so redis runs as a background process by default (remember forcing it into the background with `&` earlier?)
daemonize no
#The default listening port can be changed
port 6379
#Change where the default log file is written
logfile "/home/futeng/logs/redis.log"
#Configure where persistence files are stored
dir /home/futeng/data/redisData

Specifying the configuration file at startup
redis-server ./redis.conf
#If you changed the port, you also need to specify it when connecting with the `redis-cli` client, for example:
redis-cli -p 6380
Starting and stopping otherwise work the same as with the direct method. The configuration file is a very important tool, and it only becomes more so as your usage deepens; it is recommended to use one right from the start.



Method 3:
Using the Redis init script to start at boot
Init script
Using the init script is the recommended way to start the redis service in production. The script, redis_init_script, is located in Redis' /utils/ directory.

#Skimming the script, you will notice redis conventionally names configuration files and the like after the listening port; we follow that convention below.
#Port the redis server listens on
REDISPORT=6379
#Location of the server binary; after make install it is /usr/local/bin/redis-server by default. If you did not run make install you need to adjust this path (same below).
EXEC=/usr/local/bin/redis-server
#Location of the client binary
CLIEXEC=/usr/local/bin/redis-cli
#Location of the Redis PID file
PIDFILE=/var/run/redis_${REDISPORT}.pid
#Location of the configuration file; adjust as needed
CONF="/etc/redis/${REDISPORT}.conf"

Setting up the environment
1. As required by the init script, copy the edited configuration file to the target directory, named after the port. This must be done as root.
mkdir /etc/redis
cp redis.conf /etc/redis/6379.conf
2. Copy the init script to /etc/init.d; in this example the script is named redisd (a trailing "d" conventionally marks a background service).
cp redis_init_script /etc/init.d/redisd

3. Enabling it at boot
Running chkconfig redisd on directly at this point fails with the error: service redisd does not support chkconfig
As described in this article, add the following two comment lines at the top of the init script to set its run levels:
#!/bin/sh
# chkconfig:   2345 90 10
# description:  Redis is a persistent key-value database
#
Enabling it then succeeds.


#Enable the service at boot
chkconfig redisd on
#Start the service
service redisd start
#Stop the service
service redisd stop


http://www.tuicool.com/articles/aQbQ3u

abin 2015-09-10 21:02

          my oracle database load is particularly high   
Dear Mr Connor, my Oracle database load is particularly high and I can't connect to it; eventually the application developers rebooted their application server to resolve it! I exported an AWR report for failure diagnosis for that period of time. WORKLOAD REPOSIT...
          how to disable auto login wallet   
Hi, In a RAC database, I have an open keystore (wallet) with AUTOLOGIN. I would like to disable the AUTO LOGIN, but I am not sure of the correct procedure for that. select * from gv$encryption_wallet; WRL_TYPE WRL_PARAMETER ...
          Consensus Big Board Rankings 6-07-2010   
I finally got around to updating the latest Consensus Big Board rankings. I had been waiting for nbadraft.net and nba-draft.com to update their rankings; and since they both recently did... well... I could do my thing.

For those who are unaware, I take the Big Board rankings of the four major sites I look at for Big Boards (DX, nbadraft.net, nba-draft.com, and ESPN), then add in my own, and the average rankings make up the Consensus Big Board. This consensus gives everyone a better idea of where players may rank at the current point in time, and the database allows for all of us to see how a prospect's stock has fallen or risen as the days pass. The last update I did was May 2nd, and as you can see here (a link is also under the "Useful Files" tab on the sidebar), a lot of things have changed since then. The brief analysis of these rankings will be discussed below the fold.

There's usually a small group of prospects whose stocks make a significant increase during this time of the year, while many others see minor decreases in their stock as a result. Last year, Jonny Flynn seemed to be the main prospect moving up. This year? It seems like Luke Babbitt is the main beneficiary, while Avery Bradley and Paul George are also reaping the benefits.

The reasons? Measurements are normally the first step in a guy having his stock raised. Babbitt impressed with his athletic measurements, and his performance in the shooting drills also helped his cause. Both he and George played in the relatively obscure WAC, so those with relatively little knowledge of them came away moving them up their boards as a result. Bradley, even though he played for a major school, still benefited because he measured out taller and longer than expected, while also performing as expected in the drills.

Individual or group workouts also play a big role in people moving up this time of year. The reports coming out of San Antonio and Chicago were big boosts to George's stock, as he supposedly wowed NBA personnel at both workouts when going against another great prospect, Xavier Henry. Bradley has also done the same against a group in Indiana, while Babbitt did the same in Milwaukee.

Finally, how a player interviews can make lasting impressions on teams and reporters alike. Babbitt and George are both seen as good interviewers. George recently had a piece written about him by DX, and also had a video of his workout and interview posted.

If you have done your homework, these results should not sway your rankings much... but in some cases you may simply not have seen enough of a player, so these things help you paint a picture of him - for better or worse, as they may not be indicative of how the player performs on the court, nor give a good representation of his athleticism or what he truly thinks, since players are coached through these processes by their agents.

Stay tuned to see how these rankings continue to evolve up until draft night hits.
          Devin Ebanks Writeup   
Devin Ebanks - So. SF/PF, West Virginia

Per36min numbers against Quality Opponents (27 games):
12.2 pts, 8.6 rebs, 2.7 asts, 1.1 stls, 0.5 blks, 1.7 PFs, 2.3 TOs
47.2 FG%, 15.8 3pt% (3/19), 76.2 FT%, .55 FTA/FGA

Devin Ebanks may be the consummate role player at the SF spot. He plays with energy on both ends, he crashes the boards, he is unselfish and moves the ball well, he is an outstanding man defender and great team defender, he can score efficiently from within 15 feet, and he can knock his FTs down.

At 6’8, 210 lbs, Ebanks is not a physically imposing player. However, even though he only has above-average length and athleticism in NBA terms, he is a menace on the defensive end and can play in the paint as well as check the opposing team’s most talented offensive player. While he doesn’t possess great straight-line speed with the ball in his hands, he is fast moving without the ball. Even though he only has mediocre leaping ability, he is efficient finishing inside and does a great job challenging shots. Even though his lateral quickness isn’t elite, he understands positioning defensively to cut off lanes and stay in front of his man. Honestly, his biggest advantage athletically over his SF competition may be that he never appears to tire, especially on the defensive end.

Offensively, Ebanks has learned to play within himself, and because of that his efficiency has risen. Some may point to the season-long numbers and claim they haven’t improved from 2008-09; however, over his last 15 or 20 games he stopped taking the long jumpers which are currently his bane, and stopped forcing the issue. This has led to a substantial increase in his efficiency, to where he’s risen from the bottom of the SF pack in terms of offensive efficiency to the middle at 0.91 PPP - impressive considering he doesn’t currently have a reliable 3pt shot in his arsenal.

A common misconception about Devin is that he can’t shoot. While he has the worst 3pt% out of all the players in my database who attempt more than 0.5 threes a game, he has one of the more effective midrange shots in the current SF crop - something I expect to be backed up when DraftExpress does their annual SynergySports write-ups. Nevertheless, even with the good midrange shot, his lack of a 3pt shot is his biggest detraction as a prospect, and it’s something I had hoped to see him improve upon from his freshman season. Extending the range on his jumper needs to be his primary objective, because as long as he is only reliable out to 15 feet, the value of his other offensive skills lessens, as neither the ability to space the floor nor the threat of a jumper when he’s on the perimeter is there.

However, even with the struggles with his range, those other offensive skills are what make him a good role player prospect offensively. I’ll first touch on his passing, court vision, and decision-making. Some may point to the lack of awareness of the dwindling game clock against Villanova, or the 8 TOs against Washington, and claim I have no clue what I’m talking about. Yes, those are two recent blemishes; however, if you take a look at his season as a whole, and then realize that every player has made egregious mistakes this season, you realize they aren’t the norm for Devin. He boasts the 3rd best A:TO ratio among forwards while netting the 4th most assists, and has, along with teammate Da’Sean Butler, spent substantial time playing point forward for WVU. When he gets the ball on the wing, he looks to create for teammates first and himself second. While he’s not a flashy passer like Hobson, he makes the simple passes and sees open teammates much better than your normal SF prospect.

Another offensive strength is his motion within the offense, which can probably be attributed to the type of offense Huggins likes to play. Nonetheless, Ebanks will be seen cutting to the rim, setting or coming off screens, or going into the high post to help break a zone or draw in defenders so that teammates can get open perimeter shots. Whether it comes naturally to Ebanks, or it was ingrained into his style by Huggins, the fact that he is rarely ever stationary in the corner waiting for a pass for a full possession is going to be a nice asset for some NBA team - and it’s something that a lot of players have trouble with when the ball is not in their hands.

Like most prospects, he has his weaknesses. Most of them stem from being a mediocre-at-best ball handler who picks up his dribble far too easily when pressured by the defense. This limits his effectiveness driving the ball to the hoop, and also his ability to give his team another quality option bringing the ball up the floor. To translate the point forward abilities he showed at WVU to the NBA level, he’ll need to work on his handle with both hands and become quicker with the ball.

Another weakness is the lack of refinement in his post-up game, mainly due to his lack of strength. When he gets the ball in the post, whether from a post-entry pass or from an offensive rebound, he shows the ability to convert when he has position; however, when he’s left to create something for himself, he’ll frequently rely on a step-back fadeaway or attempt to spin around his opponent, as he doesn’t seem to have reliable touch on a hook shot or an up-and-under. I don’t know if this is something which needs immediate attention because he appears to be more of a mid-range guy, but it’s a weakness nonetheless.

Moving away from scoring and playmaking skills, Ebanks is one of the better rebounders at his position. He’s in that second tier of offensive rebounders at the SF spot behind Aminu, though defensively he’s not quite as strong, since he spends a lot of his time guarding the other team’s top perimeter threat or being the point man when WVU goes zone. However, he shows good fundamentals boxing his man out even though he is relatively light for a SF, and multiple times a game he will just outwork or outsmart the opposing team to get a rebound. This hustle will translate well to the NBA level because of his height and deceptive agility, but the fundamentals may not, as he will certainly need to add strength in order to continue to box out players like he does.

By far his biggest asset as a prospect is his defensive ability. I’ll start with his man defense: Ebanks has guarded a variety of top college players, such as Scottie Reynolds, Evan Turner, and even bigs like Greg Monroe, and done a significantly better job than his teammates in doing so. Something ESPN posted shortly after the Villanova game was that on the 37 possessions Ebanks guarded Reynolds, he had 4 points, 1-5 FG, and 4 TOs; whereas on the other 27 possessions when WVU players guarded Reynolds, he had 17 points, 4-5 FG, and 0 TOs. This was a similar situation to when Ebanks was put onto Turner in the second half of the OSU game, and Turner proceeded to go quiet after having a monster first half against the WVU zone.

Simply put, he doesn’t let his lack of elite physical tools get in the way of being an elite defender. He actively uses his arms to shut off potential passing lanes and to funnel his man away from where he wants to go and into the part of the WVU defense which can help. He doesn’t go for blocks, but rather is an effective defender who harasses opponents into lower-percentage shots by getting a hand up near their face and sticking close to them to not give them any room - without fouling, might I add. Those who say Ebanks has not made any progress this season from last will point to the steal and block numbers to say he hasn’t made improvements defensively, but as I have explained, this is far from the case.

When it comes to team defense, his value shows in the responsibility Huggins routinely gives him by putting him at the point of their 1-3-1 zone to help disrupt opposing offenses and trap ball-handlers in the corner. It can also be seen in the intelligent rotations he makes off his man assignments to give help in the post, or in the way he jumps passing lanes to create fast-break opportunities for himself and his teammates. Pretty much, Ebanks is the total package defensively, and will be able to spend time guarding the 1-4 spots in the pros.

Overall, while his scoring output this season may have been a disappointment, Ebanks has been simply amazing in how effective he’s been defensively. Taking a backseat to Da’Sean Butler may not have been a bad thing for his development either, as he likely better understands what his future role will be, and may have shed the rumored attitude problems from early in his college career. So while some may be wary of taking a prospect who may never be more than a 4th option offensively in the NBA, I believe his defensive ability, role-player skills, and intangibles make him a good prospect in the late lottery or mid-first round, and I currently have him slotted 11th on my board.
          In -House Refrigeration Tech - Warranty Support   
PA-Souderton, Office position for in-house tech support, helping Migali techs in the field and end users troubleshoot and fix equipment. We will train on Migali equipment. Job Purpose: Maintains warranty services by analyzing, and fulfilling or rejecting, warranty applications; maintaining records and databases. Duties: * Prepares warranties to be processed by gathering, sorting, organizing, and recording data, i
          Software Quality Assurance Tester - WITS Consulting - Case, ON   
 Backend database testing in Microsoft SQL environment. Responsibilities and Job Duties:....
From WITS Consulting - Fri, 09 Jun 2017 00:35:19 GMT - View all Case, ON jobs
          Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) / HM Revenue and Customs / Newcastle, Staffordshire, United Kingdom   
HM Revenue and Customs/Newcastle, Staffordshire, United Kingdom

Technical DBA Team Manager (Oracle, SQL, Manager)

With 60000+ staff and 50m customers HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our CBP Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Newcastle.

About the Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) role

This is a hands-on technical management role responsible for the availability and performance of production databases within the agreed KPIs and IT SLAs.

You will be managing a team of DBAs supporting critical production databases for a high-profile HMRC service as well as engaging in the end-to-end project delivery life-cycle.

You will help ensure the effective operations of database platforms, and proper integration with dependent services through effective staffing, monitoring, metrics, and operational excellence.

You must possess strong leadership, be detail-oriented, a quick decision maker, and have a passion for getting things right.

You will excel at managing multiple projects and tasks, and cross-functional communication within internal Delivery Groups and external suppliers in addition to managing teams during high pressure problem resolution.

You will possess strong written and verbal communication skills and be comfortable handling internal stakeholders and external vendor communications.

The ideal candidate will have experience supporting large-scale, massively concurrent, highly available database systems.

You will lead and performance manage a new team of talented and dedicated DBA's focusing on the health of the database tier through the complete system lifecycle.

You will support teams through scheduled maintenance and release deployment activities after hours.

You will share domain and technical expertise, providing technical mentorship and support the development of a virtual team community in database administration.

Your experience with Oracle RDBMS will be critical to your success; however, you should be prepared and knowledgeable and willing to innovate to explore new technology offerings that will help HMRC to adopt any future technology platform pertinent to the systems being supported.

Other information for the Technical DBA Team Manager (ITIL, Oracle, Linux, Manager) role

Essential:

• Educated to degree level

• 7+ years of industry experience

• 4+ years of experience leading DBAs

• Relevant hands-on technical management experience of DBA support teams and skills - troubleshoot, debug, evaluate, and resolve database software defects.

• Strong technical background on DBA domain

• Excellent communication skills, written and oral communication skills;

• Well versed with the ITIL framework

• People and performance management

• Ability to take the initiative, set schedules and prioritise independently

Desirable:

• Oracle Certified Practitioner

• Management level certification

• Project management experience (involving database maintenance project planning, capacity planning, knowledge transfer plans)

• Agile Development framework and DevOps

• Good understanding of the underpinning Oracle technology stack:

• Oracle GoldenGate

• Oracle RAC One Node

• Oracle Database 12c

• Oracle DBFS

• Oracle Data Guard

• Oracle Enterprise Linux

• Oracle Enterprise Manager

Working Pattern:

It should be noted that this role will require the successful candidate to provide support 24/7 outside of normal working hours as part of an on-call rota.

Must pass basic security checks and undertake National Security Clearance - Level 2- if security clearance at this level is not already in place

CV's should clearly demonstrate how the candidate meets the essential criteria and qualifications stated above.

The post is based in Longbenton with occasional travel/ to other HMRC and Government departments/locations and supplier offices.

To apply for the role of Technical DBA Team Manager (ITIL, Oracle, Linux, Manager), please click 'apply now'.

Employment Type: Permanent

Pay: 57,000 to 63,000 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £57,000 - £63,000

Apply To Job
          Oracle DBA (Support, Oracle, Linux) / HM Revenue and Customs / Newcastle, Staffordshire, United Kingdom   
HM Revenue and Customs/Newcastle, Staffordshire, United Kingdom

Oracle Database Administrator (Support, Oracle, Linux)

With 60000+ staff and 50m customers HMRC is one of the biggest organisations in the UK, running the largest digital operation in Government and one of the biggest IT estates in Europe. We have six modern, state-of-the-art digital delivery centres where multiple cross functional agile teams thrive in one of the most dynamic and innovative environments in the UK. We are expanding our CBP Delivery Group and are recruiting into a number of posts within the Revenue & Customs Digital Technology Service in Newcastle.

About the Oracle Database Administrator (Support, Oracle, Linux) role

The database administrator will be responsible for the implementation, configuration, maintenance, and performance of critical Oracle systems to ensure the availability and consistent performance of our corporate applications.

Working as part of a team, the successful candidate will support the development and sustainment of the databases, ensuring operational readiness (security, health and performance), executing data loads, performing monitoring and support of both development and production support teams.

This is a technical role requiring solid technical skills, excellent written and interpersonal skills, and the ability to work effectively both independently and within a team environment, sharing knowledge and skills, developing productive working relationships, and using your own initiative. We are looking for a flexible team player with a pro-active outlook on delivery and the rapidly changing working environment.

Responsibilities of the Oracle Database Administrator (Support, Oracle, Linux)

Manage databases through multiple product lifecycle environments, from development to mission-critical production systems to decommissioning on both virtual and physical midrange systems

Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.

Support development teams to ensure development and implementation support efforts meet integration and performance expectations.

Independently analyse, solve, and correct issues in real time, providing problem resolution end-to-end.

Refine and automate regular processes, track issues, and document changes.

Perform scheduled maintenance and support release deployment activities after hours.

Other Information about the Oracle Database Administrator (Support, Oracle, Linux) role

Essential:

2 years+ experience in database management and performance tuning and optimisation, using native monitoring, maintenance and troubleshooting tools, backup restores and recovery models on virtual machines and physical midrange systems.

A good working knowledge of Oracle Enterprise Linux operating systems running on Oracle 12c.

Experience in building virtual multi-tenant databases on a Linux virtual platform, including database upgrades and regular patching and maintenance.

A good knowledge of Oracle GRID Infrastructure plus Oracle Enterprise Manager(OEM).

Has undertaken and can demonstrate appropriate Oracle technical training for the role.

Desirable:

• BSC degree in computer science or equivalent.

• Oracle GoldenGate

• Oracle RAC One Node

• Oracle DBFS

• Oracle Data Guard

• Netbackup

Training for the desirable criteria will be provided for the right candidate who meets the essential criteria.

Working Pattern:

This post is full time; however, applicants who would like to work an alternative working pattern are welcome to apply. All requests will be considered, although the preferred working pattern may not be available.

It should be noted that this role will require the successful candidate to provide support 24/7 outside of normal working hours as part of an on-call rota.

Must pass basic security checks and undertake National Security Clearance - Level 2- if security clearance at this level is not already in place

CV's should clearly demonstrate how the candidate meets the essential criteria and qualifications stated above.

Sift / Interviews

Applicants will be sifted based upon demonstration of the essential criteria.

The post is based in Newcastle with occasional travel/ to other HMRC and Government departments/locations and supplier offices.

To apply for the role of Oracle Database Administrator (Support, Oracle, Linux), please click 'apply now'.

Employment Type: Permanent

Pay: 37,537 to 41,488 GBP (British Pound)
Pay Period: Annual
Other Pay Info: £37,537 - £41,488

Apply To Job
          (USA-MI-Livonia) TRINITY HEALTH AT HOME: Clinical Customer Service Coordinator RN - Full Time   
**Department:** HH301_70000 Customer Service Intake **Expected Weekly Hours:** 40 **Shift:** **Position Purpose:** **Job Description Details:** Job Description **Trinity Health at Home** is a ministry organization of CHE Trinity Health, one of the largest Catholic health care systems in the nation. We are dedicated to providing "Caring Excellence" to every facet of a patient's experience in the healing of body, mind and spirit. Come be part of the Excellence. **Basic Job Function Summary:** Coordinates client referrals from all sources. Responsible for gathering necessary information from referral sources, promoting Agency services and functioning as a clinical resource and customer service representative. Receives and processes client referrals as prescribed by the physician, and/or requested by the client/family, and in compliance with the state’s Nursing Practice Act, any applicable licensure/certification requirements, and the organization’s policies and procedures. Enters referrals into computerized database and forwards information in desired format to appropriate health providers and agency contacts. Evaluates referrals for client appropriateness for home health services. Receive and document medical orders, accept for services if appropriate and fully process (or delegate) referrals in an accurate and timely manner. Communicate information to involved parties according to Intake and THHS guidelines. Obtains all payer information relevant to the referred client and assists with payer verification process. Responds to inquiries regarding homecare/hospice services. Develops and maintains positive relationships with all referral sources, internal and external customers and fellow associates. **Minimal Qualifications:** § Graduate of an approved Nursing education program. § Current Registered Nurse licensure in the state in which practicing. § Two years or more of Home healthcare experience. § Must have current Driver’s license and reliable transportation to and from work site. § Must be proficient in the use of McKesson’s Horizon Home Care Software. Will accept applicable experience in lieu of specific McKession experience. § Must be proficient in the use of computer office software.. § Ability to consistently demonstrate a commitment to the mission and Organizational Code of Ethics, and adhere to the Compliance Program. § Must possess excellent and clear oral & written communication skills § 1-3 years prior experience in a customer service, homecare intake or admissions role preferred. You may email your resume directly to Barbara.Barile@trinity-health.org **Trinity Health's Commitment to Diversity and Inclusion** Trinity Health employs more than 120,000 colleagues at dozens of hospitals and hundreds of health centers in 21 states. Because we serve diverse populations, our colleagues are trained to recognize the cultural beliefs, values, traditions, language preferences, and health practices of the communities that we serve and to apply that knowledge to produce positive health outcomes. We also recognize that each of us has a different way of thinking and perceiving our world and that these differences often lead to innovative solutions. Trinity Health's dedication to diversity includes a unified workforce (through training and education, recruitment, retention and development), commitment and accountability, communication, community partnerships, and supplier diversity. 
Trinity Health offers rewarding careers in a community environment with all the advantages of working at one of the nation's largest health systems. We provide high-quality, people-centered care in 22 states through our network of hospitals, facilities, community-based services, and continuing care locations - including home care, hospice, Program of All Inclusive Care for the Elderly (PACE), and senior living facilities. If you are looking for a rewarding clinical or administrative position, you'll find exceptional career possibilities, opportunities for advancement and a job with meaning at Trinity Health. Trinity Health employs more than 131,000 colleagues across 22 states. We honor and embrace a diverse representation of people, ideas and backgrounds. Our dedication to diversity is evident in our commitment to training, education, recruitment, retention and development, as well as community partnerships and supplier diversity. Because we serve diverse populations, our colleagues are trained to recognize the cultural beliefs, values, traditions, language preferences and health practices of the communities we serve and to apply that knowledge to produce positive outcomes. We recognize that each of us has a different way of thinking and perceiving our world, and that our differences not only serve to unite us, but also lead to innovative solutions.
          Devart Excel Add-in Database Pack 1.0   
Devart Excel Add-in Database Pack
          An adventure with clocks, component, and clojure.spec   

I have long parted with my initial, lacking approach to component handling in Clojure. I now rely on Stuart Sierra’s component library for this.

In this short post, I want to showcase how this library helps structure code around clear functional boundaries and allows testing without having to depend on mocking. This might induce building components for seemingly innocuous code. I will also dive into clojure.spec to show how it helps writing automated tests on top of correct generated inputs.

This article was initially written as a literate programming org-mode file. If you edit the source you can use C-c v t to generate a single file which can be used as an executable *boot* script, which means you will need to have boot installed in order to execute this.

I used *boot* here because it is easy to build a standalone executable script with it. Be sure to have BOOT_CLOJURE_VERSION set to 1.9.0-alpha14, since clojure.spec is only available from 1.9.0 onward.
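
As a small sketch (the file name request-signing.boot is only an assumption for illustration), running the generated script from a shell looks like this:

# Pin the Clojure version boot uses so clojure.spec is available, then run the script
export BOOT_CLOJURE_VERSION=1.9.0-alpha14
chmod +x request-signing.boot
./request-signing.boot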

To start we will add a shebang line to make sure that boot is invoked to run this script.

#!/usr/bin/env boot

For the purpose of this article, we will only use a few dependencies:

(set-env! :dependencies '[[com.stuartsierra/component "0.3.1"]
                          [org.clojure/test.check     "0.9.0"]])

For the purpose of this article, we will be building request signing functionality. Since this is a standalone *boot* project, test namespaces are pulled in here as well:

(ns request.signing
   (:require [com.stuartsierra.component :as component]
             [clojure.test :refer :all]
             [clojure.test.check.generators :as tgen]
             [clojure.spec :as s]
             [clojure.spec.gen :as gen]
             [clojure.spec.test :as st])
   (:import javax.crypto.Mac javax.crypto.spec.SecretKeySpec))

Our request signing functionality will work on incoming requests which look like this:

{:timestamp     1483805460         ;; UNIX Epoch of request
 :payload       "some-command"     ;; Request payload
 :authorization {:key       "..."
                 :signature "..."}}

Provided each user is given an API key, and an API secret, we can define the request signing mechanism to be:

signature = hexadecimal_string(hmac_256(secret-key, timestamp + payload))

Factoring the request timestamp into the signing mechanism provides good protection against replay attacks, by ensuring that requests come in within a reasonable time delta (let’s say 500ms). A first implementation could be based on two components:

  • A *keystore* component which maps API keys to API secrets
  • A *signer* component which signs a payload

We can do away with the *keystore* component here and rely on a map, or something that behaves like a map. (If you want to investigate how to build map-like constructs, there is an article describing how to do that.) I won’t describe here how to build an alternate implementation which would look up keys in a database, but it is rather straightforward.

As far as signing is concerned, interacting with the JVM is required. To avoid pulling-in additional dependencies, we use the javax.crypto available classes:

(defn bytes->hex [bytes]
  (reduce str (map (partial format "%02x") bytes)))

(defn sign-string [secret-key payload]
  (let [key (SecretKeySpec. (.getBytes secret-key) "HmacSHA256")]
    (-> (doto (Mac/getInstance "HmacSHA256")
          (.init key))
        (.doFinal (.getBytes payload))
        (bytes->hex))))

We now have all necessary bits to write a first authorization function. Here is a first version without the addition of components for now:

(defn request-signature [keystore request]
  (when-let [secret (get keystore (get-in request [:authorization :api-key]))]
    (sign-string secret (str (:timestamp request) (:payload request)))))

(defn authorized-request? [keystore request]
  (when-let [signature (request-signature keystore request)]
    (= (get-in request [:authorization :signature]) signature)))

This already gives us a lot of safety: a stolen API key does not allow signing arbitrary requests, as it would with the simple key/token validation approach commonly found in API implementations.

One thing this authorization scheme is still subject to is replay attacks: a stolen signed payload can be replayed at will.

To limit this risk, we can rely on good wall clocks to ensure that requests are sent within a reasonable timeframe, which we can store as an option:

(def max-delta-ms 500)

We can then write our updated authorization function. Note how we made authorized-request? take an Authorizer as its input. This can safely be done, since started components get their dependencies provided.

(defn authorized-timestamp? [timestamp]
  (let [now (System/currentTimeMillis)]
    (<= (- timestamp max-delta-ms) now (+ timestamp max-delta-ms))))

(defn request-signature [keystore request]
  (when-let [secret (get keystore (get-in request [:authorization :api-key]))]
    (sign-string secret (str (:timestamp request) (:payload request)))))

(defrecord Authorizer [keystore])

(defn authorized-request? [{:keys [keystore]} request]
  (when-let [signature (request-signature keystore request)]
    (and (= (get-in request [:authorization :signature]) signature)
         (authorized-timestamp? (:timestamp request)))))

This solution provides a good layer of security and is sufficient for most practical purposes. Going one step further would involve guaranteeing that no replay attack can be performed by handing out a one-time token for each request. We will not describe this scheme in this article.

While complete, the solution is now hard to test, since it relies on a wall clock. There are three approaches to testing we can take:

  • Good old sleep calls which are a safe way of having spurious test errors :-)
  • Mocking wall clock calls
  • Making the clock a component

It does seem overkill to build a specific clock component for the standard behavior of a wall clock, which just reaches out to the system, but a tiny protocol gives us a boundary we can swap out in tests.

(defprotocol Clock  (now! [this]))
(defrecord WallClock [] Clock (now! [this] (System/currentTimeMillis)))

With this simple protocol we can now build our complete component system. This will be quite similar to the previously presented implementation, with the exception that the Authorizer component now depends on clock as well and will use both in authorized-request?.

(defn authorized-timestamp? [clock timestamp]
  (<= (- timestamp max-delta-ms) (now! clock) (+ timestamp max-delta-ms)))

(defn request-signature [keystore request]
  (when-let [secret (get keystore (get-in request [:authorization :api-key]))]
    (sign-string secret (str (:timestamp request) (:payload request)))))

(defrecord Authorizer [clock keystore])

(defn authorized-request? [{:keys [keystore clock]} request]
  (when-let [signature (request-signature keystore request)]
    (and (= (get-in request [:authorization :signature]) signature)
         (authorized-timestamp? clock (:timestamp request)))))

Our resulting system will thus be a three-component one:

  • A *clock* component which will give the current time.
  • A *keystore* component to look-up the secret key corresponding to an API key.
  • An *authorizer* component, used to authorize incoming requests, relying on the two above components.

We can then imagine building the system like this:

(defn start-system [secret-keys]
   (-> (component/system-map :keystore   secret-keys
                             :clock      (->WallClock)
                             :authorizer (map->Authorizer {}))
       (component/system-using {:authorizer [:clock :keystore]})
       (component/start-system)))

With this, everything necessary for authorizing requests is available, but there are no tests yet. If we were to rely on this implementation for tests, we would have to play with timing for test purposes:

(deftest simple-signing
  (let [sys (start-system {:foo "ABCDEFGHIJK"})]
    (doseq [cmd ["start-engine" "thrust" "stop-engine"]]
      (let [request {:timestamp (now! (:clock sys))
                     :payload       cmd
                     :authorization {:api-key :foo}}
            signed  (assoc-in request [:authorization :signature]
                              (request-signature (:keystore sys) request))]
        (is (authorized-request? sys signed))
        (Thread/sleep 600)
        (is (not (authorized-request? sys signed)))))))

This is unfortunately brittle and does not lend itself easily to a large number of tests since it relies on sleep.

Thanks to our component-based approach we can now write an alternate clock:

(defrecord RefClock [state] Clock (now! [_] @state))

Once we have our new clock, we can adapt the start system function:

(defn start-system [secret-keys time]
   (-> (component/system-map :keystore   secret-keys
                             :clock      (if time (->RefClock time) (->WallClock))
                             :authorizer (map->Authorizer {}))
       (component/system-using {:authorizer [:clock :keystore]})
       (component/start-system)))

This new clock can then be used for our tests, doing away with brittle sleep calls and paving the way for generative tests.

(deftest simple-signing
  (let [time (atom 0)
        sys  (start-system {:foo "ABCDEFGHIJK"} time)]
    (doseq [cmd ["start-engine" "thrust" "stop-engine"]]
      (let [request {:timestamp (now! (:clock sys))
                     :payload       cmd
                     :authorization {:api-key :foo}}
            signed  (assoc-in request [:authorization :signature]
                              (request-signature (:keystore sys) request))]
        (is (authorized-request? sys signed))
        (swap! time + max-delta-ms 1) 
        (is (not (authorized-request? sys signed)))))))

While this is nice, it only tests a very small subset of inputs. To go beyond this, we can reach out to clojure.spec to validate that we are calling our functions with correct types and to allow building generative tests.

In a few instances, we help generators by providing a set of known values. We start off by forcing every generated keystore instance to be:

{:foo "ABCDEFGH"
 :bar "IJKLMNOP"}

Generated api-key instances will also always be either :foo or :bar. Clock instance generation is bound to a RefClock instance as well.

Let’s look at the code in detail. We start by defining a few predicates to make our specs a bit easier to understand:

(def lookup?           #(instance? clojure.lang.ILookup %))
(def clock?            #(satisfies? Clock %))
(def not-empty-string? #(not= "" %))
(def sig-bytes?        #(= 32 (count %))) ;; Number of bytes in a signature
(def valid-sig-width?  #(= 64 (count %)))
(def valid-sig-chars?  #(re-matches #"^[0-9a-f]+$" %))

Next we can define data types for every plain and compound type we have created:

(s/def ::keystore lookup?)
(s/def ::clock clock?)
(s/def ::authorizer (s/keys :req-un [::keystore ::clock]))
(s/def ::signature (s/and string? valid-sig-width? valid-sig-chars?))
(s/def ::api-key keyword?)
(s/def ::authorization (s/keys :req-un [::api-key] :opt-un [::signature]))
(s/def ::timestamp int?)
(s/def ::secret-key (s/and string? not-empty-string?))
(s/def ::payload (s/and string? not-empty-string?))
(s/def ::request (s/keys :req-un [::timestamp ::payload ::authorization]))
(s/def ::bytes (s/and bytes? sig-bytes?))

I like to also provide separate specs for argument lists:

(s/def ::auth-request? (s/cat :authorizer ::authorizer :request ::request))
(s/def ::request-signature (s/cat :keystore ::keystore :request ::request))
(s/def ::auth-timestamp? (s/cat :clock ::clock :timestamp ::timestamp))
(s/def ::sign-string (s/cat :secret-key ::secret-key :payload string?))
(s/def ::bytes->hex (s/cat :bytes ::bytes))
(s/def ::now! (s/cat :block ::clock))

We can now use the above types to specify our functions. Nothing extraordinary here if you have already used spec.

(s/fdef bytes->hex :args ::bytes->hex :ret ::signature)
(s/fdef sign-string :args ::sign-string :ret ::signature)
(s/fdef now! :args ::now! :ret ::timestamp)
(s/fdef authorized-timestamp? :args ::auth-timestamp? :ret boolean?)
(s/fdef request-signature :args ::request-signature :ret ::signature)
(s/fdef authorized-request? :args ::auth-request? :ret boolean?)

We are now fully specified, and using instrument will allow verifying that functions are called properly.

The complex bit is to go from here to tests which use generators for building sensible data. Relying on the provided generators will not cut it as they would not be able to build clock and keystore instances, nor would they be able to provide sensible timestamp or signature values.

This is most obvious in request which contains co-dependent information, since the :signature field in the :authorization map depends on the payload and timestamp of the request. Likewise, testing authorized-timestamp? relies on having a solid way of generating timestamp, which we built our Clock protocol for.

Fortunately, spec allows overriding generators. We can start by building simple generators for values we want picked from a narrow set; this is for instance the case for our keystore and related API keys:

(def fake-keystore {:foo "ABCDEFGH" :bar "IJKLMNOP"})
(def fake-time     (atom 0))
(def fake-clock    (->RefClock fake-time))

(defn keystore-gen [] (s/gen #{fake-keystore}))
(defn api-key-gen  [] (s/gen (set (keys fake-keystore))))
(defn clock-gen    [] (s/gen #{fake-clock}))

We can test out these generators at the REPL:

(gen/sample (s/gen (s/with-gen ::api-key api-key-gen)))
(gen/sample (s/gen (s/with-gen ::clock clock-gen)))

To instrument bytes->hex we will need a way of generating 32-byte-wide byte arrays. Since there is no such generator, we will need to compose the creation of a 32-element vector and its coercion to a byte array:

(defn bytes-gen    [] (gen/fmap byte-array (gen/vector tgen/byte 32)))

In the above we use byte from clojure.test.check.generators since no such generator exists in clojure.spec.gen.

Only the most complex generator remains, request-gen for building request maps. If we look at our base building blocks, here is what we need to build a correct request map:

  • A *keystore* to sign the request
  • A *clock* to get a correct timestamp
  • A random *api-key*
  • A random *payload*

Once we have these elements we can transform them into a correct request. We will use fmap again here, and split request generation into two functions:

(defn sign-request [[ks req]]
   (assoc-in req [:authorization :signature] (request-signature ks req)))

(defn build-request [{:keys [clock payload keystore api-key]}]
  (vector
    keystore
    {:timestamp     (now! clock)
     :payload       payload
     :authorization {:api-key api-key}}))

(defn request-gen []
  (gen/fmap
    (comp sign-request build-request)
    (s/gen (s/keys :req-un [::clock ::keystore ::api-key ::payload])
           {::clock clock-gen ::keystore keystore-gen ::api-key api-key-gen})))

We now have a solid way of generating requests; we can again test it at the REPL:

(gen/sample (s/gen (s/with-gen ::request request-gen)))

Now that we have good generation available, we can write automated testing for all of our functions. We can do this by enumerating all testable symbols in the current namespace and running generative testing on them, supplying our list of generator overrides. This involves checking that the result is true for all test outputs generated by clojure.spec.test/check:

(def gen-overrides {::keystore      keystore-gen
                    ::clock         clock-gen
                    ::api-key       api-key-gen
                    ::bytes         bytes-gen
                    ::request       request-gen})

(deftest generated-tests
  (doseq [test-output (-> (st/enumerate-namespace 'request.signing)
                          (st/check {:gen gen-overrides}))]
    (testing (-> test-output :sym name)
      (is (true? (-> test-output :clojure.spec.test.check/ret :result))))))

To go one last step further, we can supply a different function spec to our most important function, authorized-request?, to make sure that given all provided inputs, our authorizer determines the request to be authorized:

(deftest specialized-tests
   (testing "authorized-request?"
      (is (true? (-> (st/check-fn authorized-request?
                                  (s/fspec :args ::auth-request? :ret boolean?)
                                  {:gen gen-overrides})
                     :clojure.spec.test.check/ret
                     :result)))))

Last, we run all tests:

(run-tests 'request.signing)

I’d like to thank Max Penet and Gary Fredericks for their valuable input while writing this.


          Building an atomic database with clojure   

Atoms provide a way to hold onto a value in clojure and perform thread-safe transitions on that value. In a world of immutability, they are the closest equivalent to other languages’ notion of variables you will encounter in your daily clojure programming.

Storing with an atom

One of the frequent uses of atoms is to hold onto maps used as some sort of cache. Let’s say our program stores a per-user high score for a game.

To store high scores in memory, atoms allow us to implement things very quickly:

(ns game.scores
  "Utilities to record and look up game high scores")

(defn make-score-db
  "Build a database of high-scores"
  []
  (atom nil))

(def compare-scores
  "A function which keeps the highest numerical value.
   Handles nil previous values."
  (fnil max 0))

(defn record-score!
  "Record a score for user, store only if higher than
   previous or no previous score exists"
  [scores user score]
  (swap! scores update user compare-scores score))

(defn user-high-score
  "Lookup highest score for user, may yield nil"
  [scores user]
  (get @scores user))

(defn high-score
  "Lookup absolute highest score, may yield nil
   when no scores have been recorded"
  [scores]
  (last (sort-by val @scores)))

In the above we have put together a very simple score-recording mechanism backed by an atom. Ideally this should be provided as a component, but for the purposes of this post we will keep things as simple as possible.

Using the namespace works as expected:

(def scores (make-score-db))
(high-score scores)         ;; => nil
(user-high-score scores :a) ;; => nil
(record-score! scores :a 2) ;; => {:a 2}
(record-score! scores :b 3) ;; => {:a 2 :b 3}
(record-score! scores :b 1) ;; => {:a 2 :b 3}
(record-score! scores :a 4) ;; => {:a 4 :b 3}
(user-high-score scores :a) ;; => 4
(high-score scores)         ;; => [:a 4]

Atom persistence

This is all old news to most. What I want to showcase here is how the add-watch functionality on top of atoms can help with serializing atoms like these.

First let’s consider the following:

  • We want to store our high-scores state to disk
  • The content of high-scores contains no unprintable values

It is thus straightforward to write a serializer and deserializer for such a map:

(ns game.serialization
  "Serialization utilities"
   (:require [clojure.edn :as edn]))

(defn dump-to-path
  "Store a value's representation to a given path"
  [path value]
  (spit path (pr-str value)))

(defn load-from-path
  "Load a value from its representation stored in a given path.
   When reading fails, yield nil"
  [path]
  (try
    (edn/read-string (slurp path))
    (catch Exception _)))

This also works as expected:

(dump-to-path "/tmp/scores.db"
  {:a 0 :b 3 :c 3 :d 4})          ;; => nil
(load-from-path "/tmp/scores.db") ;; => {:a 0 :b 3 :c 3 :d 4}

With these two separate namespaces, we are now left figuring out how to persist our high-score database. To be as faithful as possible, we will avoid techniques such as doing regular snapshots. Instead we will reach out to add-watch which has the following signature (add-watch reference key fn) and documentation:

Adds a watch function to an agent/atom/var/ref reference. The watch fn must be a fn of 4 args: a key, the reference, its old-state, its new-state. Whenever the reference’s state might have been changed, any registered watches will have their functions called. The watch fn will be called synchronously, on the agent’s thread if an agent, before any pending sends if agent or ref. Note that an atom’s or ref’s state may have changed again prior to the fn call, so use old/new-state rather than derefing the reference. Note also that watch fns may be called from multiple threads simultaneously. Var watchers are triggered only by root binding changes, not thread-local set!s. Keys must be unique per reference, and can be used to remove the watch with remove-watch, but are otherwise considered opaque by the watch mechanism.

Our job is thus to write a 4-argument function of a key identifying the watcher, the atom itself, the previous state and the new state.

To persist each state transition to a file, we can use our dump-to-path function above as follows:

(defn persist-fn
  "Yields an atom watch-fn that dumps new states to a path"
  [path]
  (fn [_ _ _ state]
    (dump-to-path path state)))

(defn file-backed-atom
   "An atom that loads its initial state from a file and persists each new state
    to the same path"
   [path]
   (let [init  (load-from-path path)
         state (atom init)]
     (add-watch state :persist-watcher (persist-fn path))
     state))
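
Since the watch is registered under a known key, it can also be detached later on, for instance when a component shuts down; a small sketch relying on clojure.core/remove-watch:

(defn detach-persistence!
  "Stop persisting a file-backed atom."
  [state]
  (remove-watch state :persist-watcher))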

Wrapping up

The examples above can now be exercised using our new file-backed-atom function:

(def scores (file-backed-atom "/tmp/scores.db"))
(high-score scores)         ;; => nil
(user-high-score scores :a) ;; => nil
(record-score! scores :a 2) ;; => {:a 2}
(record-score! scores :b 3) ;; => {:a 2 :b 3}
(record-score! scores :b 1) ;; => {:a 2 :b 3}
(record-score! scores :a 4) ;; => {:a 4 :b 3}
(user-high-score scores :a) ;; => 4
(high-score scores)         ;; => [:a 4]

The code presented here is available here


          Simple materialized views in Kafka and Clojure   

A hands-on dive into Apache Kafka to build a scalable and fault-tolerant persistence layer.

With its most recent release, Apache Kafka introduced a couple of interesting changes, not least of which is Log Compaction. In this article we will walk through a simplistic use case which takes advantage of it.

Log compaction: the five minute introduction.

I won't extensively detail what log compaction is, since it's been thoroughly described elsewhere. I encourage readers not familiar with the concept, or with Apache Kafka in general, to go through the existing articles on the subject, which give a great overview of the system and its capabilities.

In this article we will explore how to build a simple materialized view from the contents of a compacted kafka log. A working version of the approach described here can be found at https://github.com/pyr/kmodel and may be used as a companion while reading the article.

If you’re interested in materialized views, I warmly recommend looking into Apache Samza and this Introductory blog-post by Martin Kleppmann.

Overall architecture

For the purpose of this experiment, we will consider a very simple job board application. The application relies on a single entity type: a job description, and either does per-key access or retrieves the whole set of keys.

Our application will perform every read from the materialized view in redis, while all mutation operations will be logged to kafka.

log compaction architecture

In this scenario all components may be horizontally scaled. Additionally, the materialized view can be fully recreated at any time, since log compaction ensures that at least the last state of all live keys is present in the log. This means that by starting a read from the head of the log, a consistent state can be recreated.

Exposed API

A mere four REST routes are necessary to implement this service:

  • GET /api/job: retrieve all jobs and their description.
  • POST /api/job: insert a new job description.
  • PUT /api/job/:id: modify an existing job description.
  • DELETE /api/job/:id: remove a job description.

We can map this REST functionality to a clojure protocol - the rough equivalent of an interface in OOP languages - with a mere 4 signatures:

(defprotocol JobDB
  "Our persistence protocol."
  (add! [this payload] [this id payload] "Upsert entry, optionally creating a key")
  (del! [this id] "Remove entry.")
  (all [this] "Retrieve all entries."))

Assuming this protocol is implemented, writing the HTTP API is relatively straightforward when leveraging tools such as compojure in clojure:

(defn api-routes
  "Secure, Type-safe, User-input-validating, Versioned and Multi-format API.
   (just kidding)"
  [db]
  (->
   (routes
    (GET    "/api/job"     []           (response (all db)))
    (POST   "/api/job"     req          (response (add! db (:body req))))
    (PUT    "/api/job/:id" [id :as req] (response (add! db id (:body req))))
    (DELETE "/api/job/:id" [id]         (response (del! db id)))
    (GET    "/"            []           (redirect "/index.html"))

    (resources                          "/")
    (not-found                          "<html><h2>404</h2></html>"))

   (json/wrap-json-body {:keywords? true})
   (json/wrap-json-response)))
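
Wiring these routes into a running server is then just a matter of handing the handler to a ring adapter. The article does not show that part; a minimal sketch, assuming ring's jetty adapter (any adapter would do), could be:

(require '[ring.adapter.jetty :refer [run-jetty]])

(defn start-api!
  "Start serving the API; db is the JobDB implementation described below."
  [db]
  (run-jetty (api-routes db) {:port 8080 :join? false}))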

I will not describe the client-side javascript code used to interact with the API in this article; it is a very basic AngularJS application.

Persistence layer

Were we to use redis exclusively, the operation would be quite straightforward: we would rely on a redis set to contain the set of all known keys, and each corresponding key would contain a serialized job description.

In terms of operations, this would mean:

  • Retrieval would involve an SMEMBERS of the jobs key, then mapping over the result to issue a GET for each member.
  • Insertions and updates could be merged into a single "upsert" operation which would SET a key and then add the key to the known set through an SADD command.
  • Deletions would remove the key from the known set through an SREM command and would then DEL the corresponding key (a rough sketch follows this list).
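
As a rough sketch, the upsert and delete paths of that redis-only setup (assuming a redis wrapper namespace similar to the one used in the snippets further down) could look like:

(defn upsert-job!
  [id payload]
  ;; store the serialized description, then register the key
  (redis/set id (pr-str payload))
  (redis/sadd "jobs" id))

(defn delete-job!
  [id]
  ;; unregister the key, then drop the description itself
  (redis/srem "jobs" id)
  (redis/del id))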

Let’s look at an example sequence of events

log compaction events

As it turns out, it is not much more work when going through Apache Kafka.

  1. Persistence interaction in the API

    In the client, retrieval happens as described above. This example code is in the context of the implementation - or as clojure would have it reification - of the above protocol.

    (all [this]
      ;; step 1. Fetch all keys from set
      (let [members (redis/smembers "jobs")] 
         ;; step 4. Merge into a map
         (reduce merge {}      
           ;; step 2. Iterate on all keys
           (for [key members]  
             ;; step 3. Create a tuple [key, (deserialized payload)]
             [key (-> key redis/get edn/read-string)]))))
    

    The rest of the operations emit records on kafka:

    (add! [this id payload]
      (.send producer (record "job" id payload)))
    (add! [this payload]
      (add! this (random-id!) payload))
    (del! [this id]
      (.send producer (record "job" id nil))))))
    

    Note how deletions just produce a record for the given key with a nil payload. This produces what is called a tombstone in distributed storage systems: it tells kafka that prior entries for the key can be discarded, but the tombstone itself is kept for a configurable amount of time to ensure coordination across consumers.
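
    The record helper used above is not shown in the excerpt; a hypothetical sketch of it, assuming the plain Java producer client and EDN-serialized string payloads (both assumptions), could look like:

    (import '(org.apache.kafka.clients.producer ProducerRecord))

    (defn record
      "Build a producer record for a topic; a nil payload becomes a tombstone."
      [topic id payload]
      (ProducerRecord. topic (str id) (some-> payload pr-str)))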

  2. Consuming persistence events

    On the consumer side, the approach is as described above

    (defmulti  materialize! :op)
    
    (defmethod materialize! :del
      [payload]
      (r/srem "jobs" (:key payload))
      (r/del (:key payload)))
    
    (defmethod materialize! :set
      [payload]
      (r/set (:key payload) (pr-str (:msg payload)))
      (r/sadd "jobs" (:key payload)))
    
    (doseq [payload (messages-in-stream {:topic "jobs"})]
      (let [op (if (nil? (:msg payload)) :del :set)]
        (materialize! (assoc payload :op op))))
    

Scaling strategy and view updates

Where things start to get interesting, is that with this approach, the following becomes possible:

  • The API component is fully stateless and can be scaled horizontally. This is not much of a break-through and is usually the case.
  • The redis layer can use a consistent hash to shard across several instances and better use memory. While this is feasible in a more typical scenario, re-sharding induces a lot of complex manual handling. With the log approach, re-sharding only involves re-reading the log.
  • The consumer layer may be horizontally scaled as well

Additionally, since a consistent history of events is available in the log, adding views which generate new entities or new ways to look up data now only involves adapting the consumer and re-reading from the head of the log.

Going beyond

I hope this gives a good overview of the compaction mechanism. I used redis in this example, but of course materialized views may be created on any storage backend. In some cases even that is unneeded: since consumers register themselves in zookeeper, they could expose a query interface and let clients contact them directly.


          magento 2 review by maldiveboy   
i want to locate magento 2 review table from database (Budget: $10 - $30 USD, Jobs: Magento)
          Neat Trick: using Puppet as your internal CA   

It’s a shame that so many organisations rely on HTTP basic-auth and self-signed certs to secure access to internal tools. Sure enough, it’s quick and easy to deploy, but you get stuck in a world where:

  • Credentials are scattered and difficult to manage
  • The usability of some tools gets broken
  • Each person coming in or out of the company means either a sweep of your password databases or a new attack surface.

The only plausible cause for this state of affairs is the perceived complexity of setting up an internal PKI infrastructure. Unfortunately, this means passing up on a great UI-respecting authentication and - with a little plumbing - authorization scheme.

Once an internal CA is setup you get the following benefits:

  • Simplified securing of any internal website
  • Single-Sign-On (SSO) access to sites
  • Easy and error-free site-wide privilege revocation
  • Securing of more than just websites but any SSL aware service

Bottom line, CAs are cool

The overall picture of a PKI

CAs take part in PKI - Public Key Infrastructure - a big word for the human and/or automated processes that handle the lifecycle of digital certificates within an organisation.

When your browser accesses an SSL secured site, it will verify the presented signature against the list of stored CAs it holds.

Just like any public and private key pairs, the public part can be distributed by any means.

The catch

So if internal CAs have so many benefits, how come no one uses them ? Here's the thing: tooling plain sucks. It's very easy to get lost in a maze of bad openssl command-line options when you first tackle the task, or get sucked into the horrible CA.pl which lives in /etc/ssl/ca/CA.pl on many systems.

So the usual process is: spend a bit of time crafting a system that generates certificates, figure out too late that serials must be factored in from the start to integrate revocation support, start over.

All this eventually gets hidden behind a bit of shell script and ends up working but is severely lacking.

The second reason is that, in addition to tooling issues, it is easy to get bitten and use them the wrong way: forgot to include a Certificate Revocation List (CRL) with your certificate ? You have no way of letting your infrastructure know someone left the company ! You’re not monitoring the expiry of certificates ? Everybody gets locked out (usually happens over a weekend).

A word on revocation

No CA is truly useful without a good scheme for revocation. There are two ways of handling it:

  • Distributing a Certificate Revocation List (or CRL), which is a plain list of serials that have been revoked.
  • Making use of an Online Certificate Status Protocol (or OCSP) responder, which lives at an address bundled in the certificate and which clients can connect to for validation.

If you manage a small number of services and have a configuration management framework or build your own packages, relying on a CRL is valid and will be the mechanism described in this article.

The ideal tool

Ultimately, what you’d expect from a CA managing tool is just a way to get a list of certs, generate them and revoke them.

Guess what ? Chances are you already have an internal CA !

If you manage your infrastructure with a configuration management framework - and you should - there’s a roughly 50% chance that you are using puppet.

If you do, then you already are running an internal CA, since that is what the puppet master process is using to authenticate nodes contacting it.

When you issue your first puppet run against the master, a CSR (certificate signing request) is generated against the master's CA. Depending on the master's policy it will either be automatically signed or stored, in which case it will show up in the output of the puppet cert list command. Stored CSRs can then be signed with puppet cert sign.

But there is nothing special about these certificates; puppet cert just exposes a nice facade to a subset of OpenSSL's functionality.

What if I don't use puppet

The CA part of puppet's code stands on its own, and by installing puppet through apt-get, yum, or gem you will get the functionality without needing to start any additional service on your machine.

Using the CA

Since your CA isn't a publicly trusted root, it needs to be registered wherever you will need to validate certs. Usually this just means installing it in your browser. The CA certificate is nothing more than a public key and can be distributed as is.

For the purpose of this article, puppet will be run with a different configuration to avoid interfering with its own certificates. This means adding a --confdir to every command you issue.

A typical set-up

To illustrate how to set up a complete solution using the puppet command line tool, we will assume you have three separate sites to authenticate:

  • Your internal portal and documentation site: doc.priv.example.com
  • Graphite: graph.priv.example.com
  • Kibana: logs.priv.example.com

This set-up will be expected to handle authentication on behalf of graphite, the internal portal and kibana.

Although a CA can be published to several servers, in this mock infrastructure, a single nginx reverse proxy is used to redirect traffic to internal sites.

infrastructure

Setting up your CA

First things first, let's provide an isolated sandbox for puppet to handle its certificates in.

I’ll assume you want all certificate data to live in /etc/ssl-ca. Start by creating the directory and pushing the following configuration in /etc/ssl-ca/puppet.conf

[main]
logdir=/etc/ssl-ca/log
vardir=/etc/ssl-ca/data
ssldir=/etc/ssl-ca/ssl
rundir=/etc/ssl-ca/run

You're now ready to generate your initial environment with:

puppet cert --confdir /etc/ssl-ca list

At this point you have generated a CA, and you’re ready to generate new certificates for your users.

Although certs can be arbitrarily named, I tend to stick to a naming scheme that matches the domain of the sites they are used on; in this case, we could go with users.priv.example.com.

We have three users in the organisation: Alice, Bob and Charlie. Let's give them each a certificate, and generate one for each service we will run.

for admin in alice bob charlie; do
puppet cert --confdir /etc/ssl-ca generate ${admin}.users.priv.example.com
done

for service in doc graph logs; do
puppet cert --confdir /etc/ssl-ca generate ${service}.priv.example.com
done

Your users now all have a valid certificate. Two steps remain: using the CA on the HTTP servers, and installing the certificate on the users’ browsers.

For each of your sites, the following SSL configuration block can be used in nginx:

ssl on;
ssl_verify_client on;
ssl_certificate '/etc/ssl-ca/ssl/certs/doc.priv.example.com.pem';
ssl_certificate_key '/etc/ssl-ca/ssl/private_keys/doc.priv.example.com.pem';
ssl_crl '/etc/ssl-ca/ssl/ca/ca_crl.pem';
ssl_client_certificate '/etc/ssl-ca/ssl/ca/ca_crt.pem';
ssl_session_cache 'shared:SSL:128m';

A few notes on the above configuration:

  • ssl_verify_client on instructs the web server to only allow traffic for which a valid client certificate was presented.
  • read up on ssl_session_cache to decide which strategy works for you.
  • do not be fooled by the directive name, ssl_client_certificate points to the certificate used to authenticate client certificates with.

Installing the certificate on browsers

Now that servers are ready to authenticate incoming clients, the last step is to distribute certificates out to clients. The ca_crt.pem, client cert and key could be given as-is, but browsers usually expect the CA and certificate to be bundled in a PKCS12 file.

For this, a simple script will do the trick. This one expects the name of the generated user's certificate and a password; adapt it to your liking:

#!/bin/sh

name=$1
password=$2
domain=example.com
ssl_dir=/etc/ssl-ca/ssl
full_name=`echo $name.$domain`
mkdir -p $ssl_dir/pkcs12

openssl pkcs12 -export -in $ssl_dir/certs/$full_name.pem -inkey         \
  $ssl_dir/private_keys/$full_name.pem -certfile $ssl_dir/ca/ca_crt.pem \
  -out $ssl_dir/pkcs12/$full_name.p12 -passout pass:$password

The resulting file can be handed over to your staff, who will then happily access services.

Handling Revocation

Revocation is a simple matter of issuing a puppet cert revoke command and then redistributing the CRL file to web servers. As mentioned earlier, I would advise distributing the CRL as an OS package, which will let you quickly deploy updates and ensure all your servers honor your latest revocation list.


          Poor man's pattern matching in clojure   

A quick tip which helped me out in a few situations. I’d be inclined to point people to core.match for any matching needs, but the fact that it doesn’t play well with clojure’s ahead-of-time (AOT) compilation requires playing dirty dynamic namespace loading tricks to use it.

A common case I stumbled upon is having a list of homogeneous records - say, coming from a database or an event stream - and needing to take specific action based on the value of several keys in the records.

Take for instance an event stream which would contain homogeneous records with the following structure:

[{:user      "bob"
  :action    :create-ticket
  :status    :success
  :message   "succeeded"
  :timestamp #inst "2013-05-23T18:19:39.623-00:00"}
 {:user      "bob"
  :action    :update-ticket
  :status    :failure
  :message   "insufficient rights"
  :timestamp #inst "2013-05-23T18:19:40.623-00:00"}
 {:user      "bob"
  :action    :delete-ticket
  :status    :success
  :message   "succeeded"
  :timestamp #inst "2013-05-23T18:19:41.623-00:00"}]

Now, say you need to do a simple thing based on the value of both :action and :status.

The first reflex would be to do this within a for or doseq:

(for [{:keys [action status] :as event} events]
   (cond
     (and (= action :create-ticket) (= status :success)) (handle-cond-1 event)
     (and (= action :update-ticket) (= status :failure)) (handle-cond-2 event)
     (and (= action :delete-ticket) (= status :success)) (handle-cond-3 event)))

This is a bit cumbersome. A first step would be to use the fact that clojure seqs and maps can be matched, by narrowing down the initial event to just the matchable content. juxt can help in this situation; here is its doc for reference:

Takes a set of functions and returns a fn that is the juxtaposition of those fns. The returned fn takes a variable number of args, and returns a vector containing the result of applying each fn to the args (left-to-right).

I suggest you play around with juxt at the repl to get comfortable with it; here is the example usage we're interested in:

(let [narrow-keys (juxt :action :status)]
   (narrow-keys {:user      "bob"
                 :action    :update-ticket
                 :status    :failure
                 :message   "insufficient rights"
                 :timestamp #inst "2013-05-23T18:19:40.623-00:00"}))
 => [:update-ticket :failure]

Given that function, we can now rewrite our condition handling code in a much more succinct way:

(let [narrow-keys (juxt :action :status)]
  (for [event events]
    (case (narrow-keys event)
      [:create-ticket :success] (handle-cond-1 event)
      [:update-ticket :failure] (handle-cond-2 event)
      [:delete-ticket :success] (handle-cond-3 event))))

Now with this method, we have a perfect candidate for a multimethod:

(defmulti handle-event (juxt :action :status))
(defmethod handle-event [:create-ticket :success]
   [event]
   ...)
(defmethod handle-event [:update-ticket :failure]
   [event]
   ...)
(defmethod handle-event [:delete-ticket :success]
   [event]
   ...)

Of course, for more complex cases and wildcard handling, I suggest taking a look at core.match.
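
For reference, a minimal sketch of what this dispatch could look like with core.match; the handler names are the same placeholders used above, and handle-other-failure is made up here to show a wildcard row:

(ns example.match
  (:require [clojure.core.match :refer [match]]))

;; placeholders, as in the examples above
(declare handle-cond-1 handle-cond-2 handle-other-failure)

(defn handle-event
  "Dispatch on [action status], with a wildcard row for other failures."
  [{:keys [action status] :as event}]
  (match [action status]
    [:create-ticket :success] (handle-cond-1 event)
    [:update-ticket :failure] (handle-cond-2 event)
    [_              :failure] (handle-other-failure event)
    :else                     nil))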


          Another year of Clojure   

Clojure at paper.li

I've been involved with clojure almost exclusively for a year as smallrivers' lead architect, working on the paper.li product, and wanted to share my experience of clojure in the real world.

I had previously put clojure to work where ruby on rails wasn't a natural fit, and although smallrivers is a close neighbor of typesafe in Switzerland, that experience with the language made it prevail over scala.

Why clojure ?

While working on the backend architecture at a previous company I decided to evaluate three languages which met the needs I was faced with:

  • erlang
  • scala
  • clojure

I decided to tackle the same simple task in all three languages and see how each would fare and how I felt about them. The company's languages at that time were Ruby and JS, and coming from a C background, I wanted a language which provided simplicity, good data structure support and concurrency features, while allowing us to still code quickly.

While naturally drawn to Erlang, I quickly had to set it apart because the stack that was starting to emerge at the time had JVM based parts and would benefit greatly from a language targeting the JVM. I was a bit bummed because some tools in the erlang world were very exciting and the lightweight actors were interesting for a part of our stack.

Scala made a very strong first impression on me, but in practice I was taken aback by some aspects of it: the lack of stylistic coherency across the open source projects found on the net, which made it hard to see which best practices and guidelines would have to be taught to the team; some of the code I found was almost reminiscent of perl a few years back, in its potential to become unmaintainable some time later. The standard build tool - SBT - also made a very weak impression: it seemed to be a clear step back from maven, which, given that maven isn't a first class citizen in the scala world, seemed worrying.

Clojure took the cake, in part because it clicked with the lisper in me, in part because the common idioms that emerged from the code I read bore a lot of similarity with the way we approached ruby. The dynamic typing promised succinct code, and the notation for vectors, maps and sets hugely improved the readability of lisp - look at how hashes work in emacs lisp if you want to know what I mean. I was very excited about dosync and a bit worried by the lack of lightweight erlang style actors, even though I could see how agents could help in that regard. As I'll point out later on, we ended up not using these features at all anyhow.

The task at hand

When I joined Smallrivers to work on paper.li, it became natural to choose clojure. The team was small and I felt comfortable with it. There was a huge amount of work which needed to be started quickly so a “full-stack” language was necessary to avoid spreading across too many languages and technologies, and another investigation in how the other languages had evolved in the meantime was not possible. The main challenges to tackle were:

  • Being able to aggregate more content
  • Improve the quality of the processing done on content
  • Scaling the storage cluster accordingly
  • Automate the infrastructure

The “hiring” problem

One thing that always pops up in discussions about somewhat marginal languages is the hiring aspect, and the fear that you won't be able to find people if you "lock" yourself into a language decision that strays from the usual suspects. My experience is that when you tackle big problems that go beyond simple execution and require actual strong engineers, hiring will be a problem; there's just no way around it. Choosing people that fit your development culture and are fit to tackle big problems is a long process, and integrating them is also time consuming. In that picture, the chosen language isn't a huge deciding factor.

I see marginal languages as a problem in the following organisations:

  • Companies tackling smaller problems, or problems already solved. These are right in choosing standard languages, if I built a team to build an e-commerce site I wouldn’t go to clojure.
  • Larger companies which want their employees to jump from project to project, which makes sense from a managerial standpoint.

What we built

The bulk of what was done revolves around these functional items:

  • A platform automation tool, built on top of pallet.
  • Clojure facades for the tools relied upon (elastic search, cassandra, redis, kafka).
  • An ORM-type layer on top of cassandra
  • Our backend pipelines
  • A REST API

I won’t go in too much detail on our in-house code, but rather reflect on how things went over.

Coding style and programming “culture”

One of the advantages of lisp, is that it doesn’t have much syntax to go around, so our rules stay simple:

  • the standard 2 space indent
  • we try to stick to 80 columns, because I'm that old
  • we always use require except for: clojure.tools.logging and pallet.thread-expr which are use’d
  • we avoid macros whenever possible
  • we use dynamically rebindable symbols

Of course we embraced immutable state everywhere possible, which in our case is almost everywhere. Whenever we need to checkpoint state, it usually goes to our storage layer, not to in-memory variables.

When compared to languages such as C, I was amazed at how few rules are needed to enforce a consistent code look across projects, with very little time needed to dive into a part written by someone else.

The tools

  1. Local environment

    We didn't settle on a unique tool-suite at the office. When picking up clojure I made the move from vim to emacs because the integration is better, and I fell in love with paredit. Amongst the rest of the team, textmate, eclipse and intellij were used.

    For building projects, leiningen was an obvious choice. I think leiningen is a great poster child for what's greatest in clojure: a small and intelligent facade on top of maven, hiding all the annoying parts of maven while keeping the nice distribution part.

    For continuous integration, we wrote a small bridge between leiningen and zi, lein-zi, which outputs pom.xml files for maven; these are then used to build the clojure projects. We still hope to find some time to write a leiningen plugin for jenkins.

  2. Asynchronous programming

    Since a good part of what paper.li does relies on aggregation, async programming is very important. In the pure clojure world, the only real choices for async programming are lamina and aleph. To be honest, aleph turned out to be quite the challenge, owing to a combination of the amount of outbound connections that our work requires and the fact that aleph initially seems to target servers more than clients.

    Fortunately Zach Tellman put a lot of work into the library throughout last year and recent releases are more reliable. One very nice side effect of using a lisp to work with evented code is how readable code becomes, by retaining a sync like look.

    For some parts we still would directly go to a smaller netty facade if we were to start over, but that’s a direct consequence of how much we learned along the way.

  3. Libraries not frameworks

    A common mantra in the clojure development community is that to ease integration the focus should be on libraries, not frameworks. This shows in many widespread projects such as compojure, pallet, and a host of common clojure tools. This proved very useful to us as clients of these libraries, allowing easy composition. I think pallet stands out most in that regard. Where most configuration management solutions offer a complete framework, pallet is just a library offering machine provisioning, configuration and command and control, which allowed us to integrate it with our app and build our abstractions on top of it.

    We tried to stick to that mantra in all of our work, building many small composable libraries. We made some errors at the beginning by underutilizing some of clojure's features, such as protocols, but we now have good dynamics for writing these libraries: we write the core of them with as few dependencies as possible, describe the behavior through protocols, and then write add-ons which bring in additional dependencies and implement the protocol.
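
    A contrived sketch of that layering (all names here are hypothetical, not taken from our libraries): the core namespace only declares a protocol, and an add-on namespace implements it against a concrete backend.

    ;; core namespace: behavior only, no extra dependencies
    (defprotocol MetricSink
      (publish! [this metric value] "Push a metric to a backend."))

    ;; add-on namespace: a concrete implementation, here a trivial
    ;; console-backed one for the sake of the example
    (defrecord ConsoleSink []
      MetricSink
      (publish! [this metric value]
        (println "metric" metric "=" value)))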

  4. Macros and DSLs

    Another common mantra is to avoid overusing macros. It can't be overstated how easy they make things, though; our entity description library (which we should really prep for public release, we've been talking about it for too long now) allows statements such as these (simplified):

    (defentity :contributors
      (column :identifier (primary-key))
      (column :type (required))
      (column :name)
      (column :screen_name (index))
      (column :description)
      (column :detail (type :compound))
    
      (column :user_url)
      (column :avatar_url)
      (column :statuses_count (type :number))
    
      (has-many :articles)
      (has-many :editions (referenced false) (ttl 172800))
      (has-many :posts (key (timestamp :published_at)) (referenced false)))
    

    The power of DSLs in clojure cannot be overstated: with a few macros you can easily build full languages, allowing easy extension of functionality. Case in point: extracting text from articles. Like most people we rely on a generic readability-type library, but we also need to handle some sites that require special handling. By using a small DSL you can easily push rules that look like this (simplified):

    (defsiterule "some.obscure.site"
       [dom]
       (-> dom
           (pull "#stupid-article-id")))
    

    The great part is that you limit the knowledge to be transferred over to people writing the rules, you avoid intrusive changes to the core of your app, and these rules can safely be pulled from an external location.
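
    The article does not show how defsiterule itself is implemented; one hypothetical sketch, keeping rules as plain functions of the DOM in a registry, could be:

    (defonce site-rules (atom {}))

    (defmacro defsiterule
      "Register an extraction rule for a site; the body receives the parsed DOM."
      [site args & body]
      `(swap! site-rules assoc ~site (fn ~args ~@body)))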

    At the end of the day, it seems to me as though the part of the clojure community that came from CL had awful memories of macros making code unreadable, but when sticking to macros with a common look and feel, i.e. with-<resource> and def<resource> type macros, there are huge succinctness takeaways without hindering readability or maintenance of the code.

  5. Testing

    Every respectable codebase is going to need at least a few tests. I'm of the pragmatist church and straight out do not believe in TDD, nor in crazy coverage ratios. Of course we still have more than 95% unit test coverage, and the decoupled approach preached by clojure's original developer, rich hickey1, allows for very isolated testing. For cases that require mocking, midje provides a nice framework and using it has proven very fruitful throughout our code.

  6. Concurrency, Immutable State and Data Handling

    Funnily, we ended up almost never using any concurrency features: not a single dosync made it into our codebase, only a few atoms and a single agent (in https://github.com/pyr/clj-statsd, to avoid recreating a Socket object for each datagram sent). We also banned the use of future to more closely control our thread pools. Our usage of atoms is almost exclusively bound to things that are write once / read many; in some cases we'd be better off with rebindable dynamic symbols.

    We rely on immutable state heavily though, and by heavily I actually mean exclusively. This never was a problem across the many lines of code we wrote, and helped us keep a sane and decoupled code base.

    With facades allowing us to represent database fields, queue entries, and almost anything else as standard clojure data structures, and with powerful functions to work on them, complex handling of large amounts of data is very easily expressed. For this we fell in love with several tools which made things even easier:

    • the threading operators -> and ->>
    • the pallet thread-expr library which brings branching in threaded operations: for->, when->, and so on
    • assoc-in, update-in, seq-utils/index-by and all the functions which allow easy transformation of data structures while retaining a procedural look (a trivial sketch follows this list)
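
    As a trivial sketch on made-up data, chaining a couple of these:

    (-> {:user "bob" :counts {:articles 2}}
        (assoc-in [:counts :editions] 1)
        (update-in [:counts :articles] inc)
        (dissoc :user))
    ;; => {:counts {:articles 3, :editions 1}}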

    I cannot stress how helpful this has been for us in doing the important part of our code right and in a simple manner. This is clearly the best aspect of clojure as far as I’m concerned.

    Moreover, building on top of Java and with the current focus on “Big Data” everywhere, the interaction with large stores and tools to help building batch jobs are simply amazing, especially cascalog.

  7. The case of Clojurescript

    While very exciting, we did not have a use for clojurescript, given the size of the existing JS codebase and the willingness of the frontend developers to stick to a known quantity.

    The simple existence of the project amazes me, especially with the promise of more runtimes: there are various implementations on top of lua, python and gambit (a scheme that compiles to C). With projects like cascalog, pallet, lein, compojure, noir and clojurescript, the ecosystem addresses all parts of almost any stack you will be tempted to build, and we didn't encounter cases of feeling cornered by the use of clojure - admittedly, most of the time, a Java library came to the rescue.

  8. The community

    The community is very active and has not reached critical mass yet, which keeps its mailing-list and irc room usable. There are many influential public figures, some who bring insight, some who bring beautiful code. Most are very open and available for discussion, which shaped our approach to the language and our way of coding along the way.

Closing words

It's been an exciting year and we're now a full-fledged 80% clojure shop. I'm very happy with the result, and more so with the journey. I'm sure we could have achieved the same with other languages as well. As transpires throughout the article, the whole team feels that should we start over, we would do it in clojure again.

It helped us go fast, adapt fast and didn't hinder us in any way. The language seems to have a bright future ahead of it, which is reassuring. I would encourage people coming from python and ruby to consider it as a transition language or as their JVM-targeting language, since many habits are still valid in clojure and since it helps slightly change the way we look at problems, which can then be reapplied in more "traditional" languages.


  1. Rich hickey’s talk simple made easy and his coining of the term “complecting” illustrates that http://www.infoq.com/presentations/Simple-Made-Easy

    [return]

          The death of the configuration file   

Taking on a new platform design recently I thought it was interesting to see how things evolved in the past years and how we design and think about platform architecture.

So what do we do ?

As system developers, system administrators and system engineers, what do we do ?

  • We develop software
  • We design architectures
  • We configure systems

But it isn't the purpose of our jobs; for most of us, our purpose is to generate business value. From a non technical perspective we generate business value by creating a system which renders one or many functions and provides insight into its operation.

And we do this by developing, logging, configuring and maintaining software across many machines.

When I started doing this - back when knowing how to write a sendmail configuration file could get you a paycheck - it all came down to setting up a few machines: a database server, a web server, a mail server, each logging locally and providing its own way of reporting metrics.

When designing custom software, you would provide reports over a local AF_UNIX socket, and configure your software by writing elegant parsers with yacc (or its GNU equivalent, bison).

When I joined the OpenBSD team, I did a lot of work on configuration files. Ask any member of the team: configuration files are a big concern, and careful attention is put into clean, human readable and writable syntax; additionally, all configuration files are expected to look and feel the same, for consistency.

It seems as though the current state of large applications now demands another way to interact with operating systems, and some tools are now leading the way.

So what has changed ?

While our mission is still the same from a non technical perspective, the technical landscape has evolved and went through several phases.

  1. The first era of repeatable architecture

    We first realized that as soon as several machines performed the same task the need for repeatable, coherent environments became essential. Typical environments used a combination of cfengine, NFS and mostly perl scripts to achieve these goals.

    Insight and reporting were then provided either by horrible proprietary kludges that I shall not name here, or by emergent tools such as netsaint (now nagios), mrtg and the like.

  2. The XML mistake

    Around that time, we started hearing more and more about XML, then touted as the solution to almost every problem. The rationale was that XML was - somewhat - easy to parse, and would allow developers to develop configuration interfaces separately from the core functionality.

    While this was a noble goal, it was mostly a huge failure. Above all, it was a victory of developers over people using their software, since they didn’t bother writing syntax parsers and let users cope with the complicated syntax.

    Another example was the difference between Linux’s iptables and OpenBSD’s pf. While the former was supposed to be the backend for a firewall handling tool that never saw the light of day, the latter provided a clean syntax.

  3. Infrastructure as code

    Fast forward a couple of years: most users of cfengine were fed up with its limitations, and architectures, while following the same logic as before, became bigger and bigger. The need for repeatable and sane environments was as important as ever.

    At that point of time, PXE installations were added to the mix of big infrastructures and many people started looking at puppet as a viable alternative to cfengine.

    puppet provided a cleaner environment, and allowed easier formalization of technology, platform and configuration. Philosophically though, puppet stays very close to cfengine by providing a way to configure large numbers of systems through a central repository.

    At that point, large architectures also needed command and control interfaces. As noted before, most of these were implemented as perl or shell scripts in SSH loops.

    On the monitoring and graphing front, not much was happening, nagios and cacti were almost ubiquitous, while some tools such as ganglia and collectd were making a bit of progress.

Where are we now ?

At some point recently, our applications started doing more. While for a long time the canonical dynamic web application was a busy forum, more complex sites started appearing everywhere. We were not building and operating sites anymore but applications. And while with the help of haproxy, varnish and the likes, the frontend was mostly a settled affair, complex backends demanded more work.

At the same time the advent of social enabled applications demanded much more insight into the habits of users in applications and thorough analytics.

New tools emerged to help us along the way:

  • In memory key value caches such as memcached and redis
  • Fast elastic key value stores such as cassandra
  • Distributed computing frameworks such as hadoop
  • And of course on demand virtualized instances, aka: The Cloud
  1. Some daemons only provide small functionality

    The main difference with the new stack found in backend systems is that the software it runs is not useful on its own anymore.

    Software such as zookeeper, kafka or rabbitmq serves no other purpose than to provide supporting services in applications, and its functionality is almost only available as libraries to be used in distributed application code.

  2. Infrastructure as code is not infrastructure in code !

    What we missed along the way it seems is that even though our applications now span multiple machines and daemons provide a subset of functionality, most tools still reason with the machine as the top level abstraction.

    puppet for instance is meant to configure nodes, not clusters, and makes dependencies very hard to manage. A perfect example is the complication involved in setting up configurations that depend on other machines.

    Monitoring and graphing, except for ganglia, have long suffered from the same problem.

The new tools we need

We need to kill local configurations, plain and simple. With a simple enough library to interact with distant nodes, starting and stopping services, configuration can happen in a single place; instead of relying on a repository-based configuration manager, configuration should happen from inside applications and not be an external process.

If this happens in a library, command & control must also be added to the mix, with centralized and tagged logging, reporting and metrics.

This is going to take some time, because it is a huge shift in the way we write software and design applications. Today, configuration management is a very complex stack of workarounds for non standardized interactions with local package management, service control and software configuration.

Today dynamically configuring bind, haproxy and nginx, installing a package on a Debian or OpenBSD, restarting a service, all these very simple tasks which we automate and operate from a central repository force us to build complex abstractions. When using puppet, chef or pallet, we write complex templates because software was meant to be configured by humans.

The same goes for checking the output of running arbitrary scripts on machines.

  1. Where we’ll be tomorrow

    With the ease PaaS solutions bring to developers, and offers such as the ones from VMWare and open initiatives such as OpenStack, it seems as though virtualized environments will very soon be found everywhere, even in private companies which will deploy such environments on their own hardware.

    I would not bet on it happening but a terse input and output format for system tools and daemons would go a long way in ensuring easy and fast interaction with configuration management and command and control software.

    While it was a mistake to try to push XML as a terse format replacing configuration files to interact with single machines, a terse format is needed to interact with many machines providing the same service, or to run many tasks in parallel - even though, admittedly, tools such as capistrano or mcollective do a good job at running things and providing sensible output.

  2. The future is now !

    Some projects are leading the way in this new orientation; 2011, as I've seen it called, will be the year of the time series boom. For package management and logging, Jordan Sissel released such great tools as logstash and fpm. For easy graphing and deployment, etsy released great tools, amongst which statsd.

    As for bridging the gap between provisionning, configuration management, command and control and deploys I think two tools, both based on jclouds1 are going in the right direction:

    • Whirr2: Which lets you start a cluster through code, providing recipes for standard deploys (zookeeper, hadoop).

    • pallet3: Which lets you describe your infrastructure as code and interact with it in your own code. pallet's phase approach to cluster configuration provides a smooth dependency framework which allows easy description of dependencies between configuration across different clusters of machines.

  3. Who’s getting left out ?

    One area where things seem to move much slower is network device configuration. For people running open source based load-balancers and firewalls, things are looking a bit nicer, but the switch landscape is a mess. As tools mostly geared towards public cloud services make their way into private corporate environments, hopefully they'll also get some of the programmable


          Some more thoughts on monitoring   

Lately, monitoring has been a trending topic in the devops crowd. ripienaar and Jason Dixon, amongst others, have voiced what many are thinking. They've done a good job describing what's wrong and what sort of tool the industry needs. They also clearly express the need to part from monolithic supervision and graphing solutions.

I’ll take my shot at expressing what I feel is wrong in the current tools:

Why won’t you cover 90% of use cases?

Supervision is hard: each production is different, and complex business logic must be tested, so indeed, a monitoring tool must be easy to extend; that's a given and every supervision tool got this right. But why on earth should tests that every production will need be implemented as extensions ? Let's take a look at what it takes to poll a machine's load average over SNMP, the least intrusive way to check it:

  • The nagios core engine determines that an snmp check must be run for a machine
  • Fork, execute a shell which execs the check_snmp command
  • Feed the right arguments to snmpget

You think I am kidding ? I am not. Of course each machine needing a check will go through these steps. So for as few as 20 machines requiring supervision, at each check interval 60 processes would be spawned. 60 processes spawned for what ? Sending 20 udp packets and waiting for a packet in return. The same goes for TCP, ICMP, and many more.

But it gets better ! Want to check more than one SNMP OID on the same machine ? The same process happens for every OID, which means that the number of spawned processes multiplies with the number of values you poll.

Now consider the common use case, what does a supervision and graphing engine do most of its time:

  • Poll ICMP
  • Poll TCP - sometimes sending or expecting a payload, say for HTTP or SMTP checks
  • Poll SNMP

So for a simple setup of 20 machines, checking these simple services, you could be well into the thousands of processes spawned every check interval, assuming a reasonable interval of 30 seconds or a minute.

Add to that some custom hand written scripts in perl, python, ruby - or worse, bash - to check for business logic and you end up having to sacrifice a large machine (or cloud instance) for simple checks.

That would be my number one requirement for a clean monitoring system: Cover the simple use cases ! Better yet, do it asynchronously ! Because for the common use case, all monitoring needs to do is wait on I/O. Every language has nice interfaces for handling evented I/O; the core of a poller should be evented.

There are of course a few edge cases which make it hard to use that technique, ICMP coming to mind since it requires root access on UNIX boxes, but either privilege separation or a root process for ICMP checks can mitigate that difficulty.

Why is alerting supposed to be different than graphing ?

Except for some less than ideal solutions - looking at you, Zabbix - supervision and graphing are most of the time two separate tool suites, which means that in many cases the same metrics are polled several times. The typical web shop now has a cacti and a nagios installation; standard metrics such as available disk space will be polled by cacti and then by nagios (in many cases through a horrible private mechanism such as nrpe).

Functionally speaking the tasks to be completed are rather simple:

  • Polling a list of data-points
  • Ability to create compound data-points based on polled values
  • Alerting on data-point thresholds or conditions
  • Storing time-series of data-points

These four tasks are all that is needed for a complete monitoring and graphing solution. Of course this is only the core of the solution and other features are needed, but as far as data is concerned these four tasks are sufficient.

How many times will we have to reinvent SNMP

I'll give you that: SNMP sucks; the S in the name - simple - is a blatant lie. In fact, for people running in the cloud, a collector such as Collectd might be a better option. But the fact that every monitoring application "vendor" has a private, non-interoperable collecting agent is distressing to say the least.

SNMP can rarely be totally avoided and, when possible, should be relied upon. Well thought out, easily extensible collectors are nice additions, but most solutions are clearly inferior to SNMP and add stress on machines through sequential, process-spawning checks.

A broken thermometer does not mean you're healthy

Discovery mechanisms (LLDP, CDP, SNMP) are very useful to make sure the assumptions you make about a production environment match reality, but they should never be the basis of decisions or considered exhaustive.

A simple analogy: using discovery-based monitoring solutions is equivalent to storing your machine list in a DNS zone file. It should be true, and there should be mechanisms to ensure it is true, but it might get out of sync over time: it cannot be treated as a source of truth.

Does everyone need a horizontally scalable solution ?

I appreciate the fact that everyone wants the next big tool to be horizontally scalable and to distribute checks geographically. The thing is, most people need this because a single machine or instance's limits are very easily reached with today's solutions. A single-process evented check engine, with an embedded interpreter allowing simple business logic checks, should be enough to match most people's needs.

This is not to say that once the single machine limit is reached, a distributed mode should not be available for larger installations. But the current trend seems to recommend AMQP-type transports which, while still being more economical than nagios' approach, will put an unnecessary strain on single machine setups and also raise the bar of prerequisites for a working installation.

Now as far as storage is concerned, there are enough options out there to choose from which make it easy to scale storage. Time-series and data-points are perfect candidates for non relational databases and should be leveraged in this case. For single machine setups, RRD type databases should also be usable.

Keep it decoupled

The above points can be addressed by using decoupled software. Cacti for instance is a great visualization interface but has a strongly coupled poller and storage engine, making it very cumbersome to change parts of its functionality (for instance replacing the RRD storage part).

Even though I believe in making it easy to use single machine setups, each part should be easily exported elsewhere or replaced. Production setups are complex and demanding, each having their specific prerequisites and preferences.

Some essential parts stand out as easily decoupled:

  • Data-point pollers
  • Data-point storage engine
  • Visualization Interface
  • Alerting

Current options

There are plenty of tools which, even though they need a lot of work to be made to work together, still provide a "good enough" feeling; amongst those, I have been happy to work with:

  • Nagios: The lesser of many evils
  • Collectd: Nice poller which can be used from nagios for alerting
  • Graphite http://graphite.wikidot.com: Nice grapher which is inter-operable with collectd
  • OpenTSDB http://opentsdb.net: Seems like a step in the right direction but requires a complex stack to be setup.

Final Words

Now of course if all that time spent writing articles was spent coding, we might get closer to a good solution. I will do my best to unslack(); and get busy coding.


          Reply To: HELP: Editing post_parent directly in database not working.   

great – thanks for the update


          Crowdsourcing groundwater monitoring in the parts of Boston built atop wooden pilings sunk into landfill   

The Boston Sun reports on a pilot project by the Boston Groundwater Trust to use Bluetooth-enabled well caps and a mobile app to better monitor the levels of groundwater that keep the wooden pilings that support hundreds of buildings in the Back Bay, the South End, the Fenway and Beacon Hill from collapsing.

Many of the homes (and some larger structures, such as Trinity Church) in those neighborhoods sit on wooden pilings that have to be kept wet to keep them from being attacked by wood-munching microorganisms. In the LightWell project, the trust is installing special well caps that will update the water levels in the wells they cover once an hour - and let people with the app on their phones collect the data:

Digitally fabricated out of translucent Corian, the new well caps will house LED lights and a microcontroller that is connected to a depth sensor 30 feet below grade. The sensor will be reading and logging groundwater continuously. Every hour, it will provide a reading with a scrolling message by an LED matrix. Users will be able to use the free mobile app to get the reading and push it to a cloud storage database. This leverages the public's mobile phones to crowd-source the real-time data, turning their devices into tools of citizen science.

For now, the caps will be rotated among several blocks in the neighborhoods.


          Database Developer   

          jClub Acquires Assets of Choxi to Create Exciting New Online Shopping Destination   

NEW YORK, June 29, 2017 /PRNewswire/ -- jClub, a discount e-commerce retail store, announced it has acquired all of the assets, including the multi-million customer database, of Choxi.com Inc., an online shopping platform that declared bankruptcy in December 2016. The acquisition allows...



          Growing Pile of Data Shows That Voter Fraud Is a Real and Vast Problem   

This week, The Heritage Foundation is updating its Voter Fraud Database with 89 new entries, including 75 convictions and a slew of overturned elections and... Read More

The post Growing Pile of Data Shows That Voter Fraud Is a Real and Vast Problem appeared first on The Daily Signal.


          Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach   
Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer’s disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than $1\times 10^{\mathrm {\mathbf {-6}}}$ . Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
          Data Entry Specialist   
MA-South Chelmsford, Talascend is currently seeking a Data Entry Specialist for a short term project located in Chelmsford, Massachusetts Responsible for transferring data into our new database system. Exhibit skills of: Self-directing work flow Detail oriented Ability to take initiative and to exercise judgement and management of priorities Demonstrated ability to operate computer systems and work within Microsoft Su
          Citrix Feature Requests; Database design and Provisioning Versioning   
The blog about an issue with Citrix Director got well received. I saw a tweet from Citrix engineering that they are looking into it. They might add functionality to check for stale data. I think that is awesome and shows the commitment of the engineering team working on it. I really get a smile […]
          Using Performance Insights to Analyze Performance of Amazon Aurora with PostgreSQL Compatibility – #AWS Video   
Watch a step-by-step demonstration of how Amazon RDS Performance Insights analyzes performance of an Amazon Aurora (PostgreSQL) database instance. Learn more about Performance Insights and the PostgreSQL compatible edition of Aurora at http://amzn.to/2sjk1sj. Performance Insights is a database performance dashboard that helps you quickly assess performance of your relational database workloads, and tells you when […]
          Design, Deploy, and Optimize SQL Server on AWS – #AWS Online Tech Talks Video   
Enterprises are quickly moving database workloads like SQL Server to the cloud, but with so many options, the best approach isn’t always obvious. You can exercise full control of your SQL Server workloads by running them on Amazon EC2 instances, or leverage Amazon RDS for a fully managed database experience. This session will go deep on […]
          Nutanix Announces NEW VM Migrations and Database Transformations for Simple Enterprise Cloud Adoption   
At Nutanix .NEXT 2017, Nutanix announced new virtual machine (VM) migration and database (DB) transformational capabilities, designed to help customers more easily assess, and more simply migrate or transform datacenter workloads into the Nutanix Enterprise Cloud. Whenever existing workloads require rehosting or replatforming, robust migration capabilities are needed to streamline and automate the process, minimizing […]
          Resource: Nevada Instant Atlas   
Interactive database that produces downloadable, customized maps and graphs using data from over 50 sources. Data can be compared by county, by urban versus rural region, or combined, allowing users to create visual representations using hundreds of health measures and indicators. -- University of Nevada, Reno School of Medicine
          Microsoft: Microservices == Microdatabases – Cloud Application Development   
I’ve been playing in the microservices conversation for quite a while now, and just wanted to callout a really nice codebase/walkthrough at https://github.com/dotnet-architecture/eShopOnContainers While we’re on the topic, the first question I often hear is “what is a microservice?” or “what is the difference between a microservice and SOA?” Seeing as how microservices doesn’t even have […]
          Random Friday Morning Thoughts   

  • In the last week I've had home grown tomatoes delivered to my office by two local attorneys. There appears to be a bumper crop in the county this year. (And that ends this Country Agricultural Minute.)
  • I ran into a former Bridgeport resident/former Texas House Rep in Brookshires in Bridgeport yesterday. (Wearing a Baylor visor, no less.) 
  • Something might be wrong with Fox 4's Steve Eagar. I might start a new game called "How long into the broadcast will it be before he stumbles over the teleprompter." (He's had to anchor the 10:00 p.m. newscast alone for about three days this week and it has not gone well.)
  • Fox Sports website has laid off a ton of sports writers this week (including the great Stewart Mandel) after announcing it will focus more on videos than the written word. That's stupid. That's Idiocracy.
  • I'm struggling in my fishing this year. What lure (including color) do you use when the water is incredibly clear?
  • The home page for each Texas appellate court is back up, but the back end database (which has the information that you go there to see in the first place) is still down. That makes three days in a row. Something is royally screwed up.
  • The NRA put out an ad which is a not-so-subtle call for an armed insurrection against the Dirty Lib.
  • "(CNN) - An Ohio city councilman has suggested a controversial solution to the growing opioid problem in his town: If an addict keeps overdosing, the city won't dispatch anyone to save their life." If you won't go out on the second or third time, why go out on the first?
  • A city in Iran, with 1.1 million people, set a world record in high temperature yesterday: 129 degrees. 
  • Loyal Christian and Deputy Press Secretary Sarah Huckabee Sanders defended Trump's offensive tweets yesterday by saying Trump shouldn't be considered a role model because, “When it comes to role models, as a person of faith, I think we all have one perfect role model." She followed that up with saying Trump "fights fire with fire." 
  • Former Texas Tech coach Mike Leach is still mad about not getting paid after his firing. (USA Today.) He was on The Ticket about a month ago with one of the conditions of the interview being that he got to rant that Tech, in his opinion, screwed him out of over $2 million. And, man, he went off. 
  • Speaking of, whatever happened to Craig James? He holds the distinction of getting Leach fired and losing to Ted Cruz in the same year. That's a bad year. 
  • You realize that they have no idea who the actual shooter and getaway driver were, right? 
  • I think I'm doubling down on my belief that the Bedford girl found in the Arlington landfill was a dumpster suicide. No arrests. No search warrants. No family outrage of "the police aren't doing anything!" I bet she left a note. 
  • Last night a rookie NY Yankee, Dustin Fowler, in the first inning of his first game suffered a gruesome "open ruptured patella tendon." I don't know what that is, and I'm not looking to find out.  
  • The Fort Worth PD Twitter account basically posts nothing but silliness with no real effort to seek the public's help in law enforcement. Then this morning they posted a video of a girl wearing a Peaster High School t-shirt and her friend taken from a Target security cam with the caption "she stole mascara." It even sought Peaster school's twitter account for help. It was quickly deleted and replaced with a different caption. I wonder if the girl wearing the T-shirt is not the one actually accused of theft but the original tweet, referencing Peaster, so implied. (Side note: Aren't there bigger crimes to post about?)

  • Trump will leave for Trump National Golf Club at Bedminster, New Jersey this afternoon. That will make his 32nd day at a golf club as president.



          PHP Programmer   
Skilled India Placement Services - Lucknow, Uttar Pradesh - Developing features in PHP, Yii, Codeigniter, Laravel. Developing JavaScript, jquery plug-in. Database Architecture, CRM and other digital...
          (USA-MI-Kalamazoo) Assistant Controller   
Assistant Controller Robert Half Management Resources is seeking an Assistant Controller for a prestigious company in Kalamazoo. This position will require the consultant to roll up their sleeves and get into the details of the role. Email Adam Minor at adam.minor@rhmr.com. Assistant Controller Responsibilities * Complete month-end, quarter-end and year-end close including financial statements * Prepare and maintain account reconciliations/analyses for balance sheet and income statement accounts * Handle monthly financial executive package and control book to include budget to actual comparisons by department * Coordinate and lead external financial audits * Prepare external monthly and quarterly reporting packages and supply to Board meeting decks * Handle treasury function to ensure vendors are paid timely and cash forecasts are accurate * Provide technical accounting GAAP mentorship, external reporting and support to the Company * Maintain accounting and reporting policies * Assist in the development of the accounting team, including staffing, performance management, and training * Other responsibilities include, but are not limited to, participating in special projects, system improvements or ad hoc analyses/projects as assigned. Our industry-leading alliances and broad client network provide you greater access to a variety of unique interim and long-term project opportunities that can keep you continuously engaged. We also provide competitive benefits and compensation packages, as well as online training and continuing professional education (CPE). Our parent company, Robert Half, once again was named first in our industry on Fortune® magazine's list of "World's Most Admired Companies." (March 1, 2017) At Robert Half Management Resources, your experience matters - and we put it to good use. Apply with us today! To apply for this position or for more information on other engagements, visit us online at roberthalfmr.com or call your branch office at 1.888.400.7474. All applicants applying for U.S. job openings must be authorized to work in the United States. All applicants applying for Canadian job openings must be authorized to work in Canada. © 2017 Robert Half Management Resources. An Equal Opportunity Employer M/F/Disability/Veterans By clicking 'Apply Now' you are agreeing to Robert Half Terms of Use. *Req ID:* 02220-9500627762 *Functional Role:* Controller - Assistant *Country:* USA *State:* MI *City:* Kalamazoo *Postal Code:* 49008 *Compensation:* DOE *Requirements:* The ideal Assistant Controller will have strong technology skills, including Microsoft Excel and PowerPoint, ERP systems experience, and database applications.
A Bachelor's Degree in accounting or finance, or advanced degree or certification preferred. This Assistant Controller should have strong communication, technology, analytical and organizational skills. This Assistant Controller will preferably have five or more years of experience in accounting or finance, and public accounting experience is highly valued. To apply please email your resume to Adam at adam.minor@rhmr.com or call 616.774.3286. Robert Half Management Resources is the world's premier provider of senior-level accounting, finance and business systems professionals on a project or interim basis. We provide companies cost-effective project resource solutions and staff augmentation services. Operating from more than 150 offices worldwide we maintain a network of highly skilled accounting, financial and business systems professionals to assist with your toughest business challenges.
          (USA-MI-Kalamazoo) Talent Acquisition Recruiter, PGS   
This role can be based at any Pfizer US site, with significant PGS presence.Reporting to the Senior Manager, Talent Acquisition PGS Team, the Talent Acquisition Recruiter will be responsible for full life cycle recruiting to support the PGS organization. Will work closely with Hiring Managers and HR business partners to meet business hiring needs.Support full-life cycle recruitment for the PGS sites supported in accordance with global operating procedures and best practice principles, including sourcing, selection and offer development.•Support Kalamazoo recruitment and other smaller, PGS facilities as needed. In future may be asked to take on recruitment supporting other PGS sites as work volume fluctuates on the team.•Will take on special recruitment projects as needed to support the client group, and to grow professionally.•Become credible, trusted talent advisor to PGS client groups.•Ensure a positive client and candidate experience throughout the full recruitment life cycle.•Able to provide first level feedback for internal and external candidates.•Manage the offer process for internal and external candidates, including pre-employment•screening and offer development.•Ensure that hiring is consistent with the business goals and follow recruitment guidelines and processes to insure compliance.•Evaluate candidate backgrounds to match core competencies with key hiring requirements and assess motivational fit. Utilize appropriate selection techniques.•Effectively integrate diversity and veteran recruitment into the staffing process to ensure diverse candidate slates.•Develop an understanding of Pfizer's Benefits Program and its competitive advantage in the market place.•Achieve recruitment metrics against Global TA targets.Organizational Relationships with:HR BOS partnersPGS TA Team membersNA TA Team membersInternal and external candidatesQualifications•BS degree in Human Resources, Business, a related discipline; or equivalent professional work experience required.•Prior HR experience required, preferably in direct recruiter or Recruitment Coordinator role internally within Pfizer or in an external recruitment role hiring junior positions.•Must have demonstrated ability to develop solid working relationships with both hiring managers and all HR coworkers associated with your client groups.•Must have experience using large ATS database.•Must have the ability to prioritize work and manage multiple projects•Must have strong written and verbal communication skills•Prior recruitment experience supporting manufacturing operations or supply chain management preferred.•Ability to comply with local site regulations when working at a manufacturing facility.Sunshine ActPfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider's name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payme
          (USA-MI-Kalamazoo) Quality Control Technician - Chemistry   
**Job Description** **Company Information** **About Us** Thermo Fisher Scientific Inc. (NYSE: TMO) is the world leader in serving science, with revenues of $18 billion and approximately 57,000 employees in 50 countries. Our mission is to enable our customers to make the world healthier, cleaner and safer. We help our customers accelerate life sciences research, solve complex analytical challenges, improve patient diagnostics and increase laboratory productivity. Through our premier brands – Thermo Scientific, Applied Biosystems, Invitrogen, Fisher Scientific and Unity Lab Services – we offer an unmatched combination of innovative technologies, purchasing convenience and comprehensive support. All of our employees share a common set of values - Integrity, Intensity, Innovation and Involvement. Our ability to grow year after year is driven by our ability to attract, develop and retain world-class people who will thrive in our environment and share in our desire to improve mankind by enabling our customers to make the world healthier, cleaner and safer. If you share in our values and if you're looking for an employer who is strongly committed to developing talent and rewarding achievement, come grow with us at Thermo Fisher Scientific. **Division Summary:** The Anatomical Pathology Division (APD) provides laboratories with the broadest portfolio of instrument and consumable solutions, from specimen collection and grossing to advanced staining and cover slipping. The Division generates $400MM in annual revenue and has 1,500 employees in 13 countries. The anatomical pathology product line includes Richard Allan Scientific, Erie Scientific, Microm, Shandon, and Lab Vision. **Position Summary:** The Lead Quality Control Chemistry Technician will assist in laboratory work for testing of raw materials, in-process samples (bulks / batches) and finished goods. The Lead QC Chemistry Technician will test samples using gas-chromatography (GC), UV-VIS spectrophotometry (UV-VIS), pH, Karl-Fisher titration, standard titration, and other analytical methods. The QC Chemistry Technician must document results with accuracy in electronic databases and in paper records. The Lead QC Chemistry Technician should be familiar with cGLP / cGMP / cGDP. He / she will partner with colleagues in the laboratory and in manufacturing to optimize production and to troubleshoot quality concerns. **Key Responsibilities:** + Follow site Standard Operating Procedures (SOPs) and Work Instructions (WI) + Analyze chemical raw material samples. + Analyze chemical bulk solution and finished goods samples for product release. + Maintain laboratory records (data integrity). + Maintain laboratory instruments. + Ability to work overtime and weekends to support manufacturing + Responsible for contributing to the continual quality & reliability improvement of APD products and services. + Ensuring policies, procedures and practices are in compliance with global quality & regulatory requirements and meet the needs of our customers & Quality Policy. + Responsible for performing tasks to support the quality system and quality policy as directed by QA/RA management. **Minimum Requirements/Qualifications:** + Bachelor’s Degree in a science related field (i.e. 
chemistry or biology) + Experience in an analytical chemistry laboratory preferred + Excellent communication and attention to detail + Ability to work independently and as part of a team, self-motivation, adaptability and a positive attitude + Must demonstrate strong organizational skills and be able to handle multiple assignments simultaneously + Must be willing to work with and around hazardous chemicals. + Must possess strong organizational skills. + Experience of working with FDA regulated products desired (Medical Device/IVD preferred) (pharmaceutical, dietary supplement or food experience is acceptable). + Knowledge of ISO13485 / FDA QSR 21 CFR Part 820 / 803 requirements preferred, 21 CFR Part 110, 111 or 211 is acceptable. + Excellent interpersonal skills + Excellent communication skills both written & oral + Excellent computer skills, particularly spreadsheets/graphical software tools (e.g. Excel) + Less than 5% travel (US & International) **Non-Negotiable Hiring Criteria:** + Bachelor’s Degree in a science related field (i.e. chemistry or biology) + Minimum 1 year of experience working in a laboratory setting Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.
          (USA-MI-Kalamazoo) QO Specialist APRR   
This position is for a QO Specialist on the Annual Product Record Review (APRR) team within the Product Review and Investigation unit in Quality Operations. The QO Specialist responsibilities will include using various computer systems to pull the data for and author a subset of approximately 200 APRR reports written for products manufactured at the Kalamazoo site. The process includes the compilation and review of release data, stability data, deviations, regulatory and site changes, and validation activities; and, the summarization of that data into a concise report that is reviewed by the QO product professional and approved by production and quality management.•Query data from multiple systems to meet established due dates. Systems used include, but are not limited to: LIMS, SAP, APRR Product History and Analytical Reports, MBR, PDM and QTS. Numerous legacy systems are also queried for data.•Prepare the APRR report in accordance with established procedures•Work with the applicable QO colleagues for the review and revision of the APRR reports including entry into the electronic change control system.•May be assigned additional responsibilities as required, including: preparation of the APRR schedule, updates to procedures, audit support and training of other team members.•Adhere to strict deadlines and assignments in order to meet site quality metrics of 100% on-time delivery of all APRRs.•Work in a team environment to meet all team objectivesDegree: Bachelor's degree in scientific field (Chemistry, Biochemistry, or related science) preferably with 2 years related experience (reporting of data from computerized systems, data review, and technical writing.)Technical Skills:•Excellent organizational skills, and ability to handle changing deadlines. Must possess flexibility to respond to constantly changing conditions and priorities.•Good communication skills to deal with all levels of customers and partners, both internal and external.•Demonstrated experience and skills in documentation of scientific or quality information.•Knowledge and understanding of GMP requirements and DP manufacturing processes and operations, with the ability to propose appropriate corrective actions.•Demonstrated critical thinking, ability to pay close attention to detail and accuracy, and meet established due dates.•Ability to follow written and verbal directions and properly handle confidential documents.•Demonstrated ability to work both independently and in a team environment.•Proficiency in IT systems such as LIMS, Trackwise databases (QTS), PDM and associated reporting tools; and the use of Excel for evaluation of data, is desirable.•Self motivated and willing to learn new and changing responsibilities.•Must be able to sit at a computer for long periods of time.•Must be able to lift and move binders or potentially boxes, weighing approximately 25 lbs.Generally First shift, 8:00-4:30. Extended work hours or weekend work may be required.EEO StatementSunshine ActPfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider's name, address and the type of payments or other value received, generally for public disclosure. 
Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as
          (USA-MI-Marshall) Transportation Coordinator/Planner   
Transportation Coordinator/Planner Tracking Code 178092-846 Job Description Job Description The Transportation Coordinator contributes to and is responsible for the optimization of the transport flows and costs associated with Tenneco (North American inbound, outbound and expedited transport providers). The Transportation Coordinator also has responsibility for working with the Logistics Department coordinating the implementation of logistical Best Business Practices. The person in this role also performs analytical comparisons or evaluations of business critical projects; gathers, analyzes and benchmarks historical, current and proposed transportation trends for management. Duties and Responsibilities + Plans transportation flow for all suppliers inbound to site and outbound paid transportation + Collects, analyzes and issues proposed transportation rate and service proposal comparisons + Creation & maintenance of transportation dashboard distributed weekly + Supports of the financial, strategic and technical capabilities of the organization with freight cost analyses and reporting + Partners with internal and external operating units such as finance, commodity buyers, MIS, internal and external resources in supporting the development of logistics best practices + Maintains databases and reports for the logistics department, including: key performance indicators (KPIs), freight rate calculator, open orders, various freight invoice audits, transportation rate analyses and shipment validations + Provides project management support for logistics, including any custom applications + Provides interactive communication with distribution center, manufacturing plants, finance, purchasing and outside service providers, including vendors, and other support resource + Travel up to 10% Required Experience Required Skills + Strong PC skills; database manipulation and query; proficiency in MS Office + General understanding of SAP + Material planning knowledge a plus + Logistics knowledge of transport modes + Ability to work independently and in a team environment inside of a manufacturing facilityRequired Experience + We are an equal opportunity employer. Employment selection and related decisions are made without regard to gender, race, age, disability, religion, national origin, color, gender identity, sexual orientation, veteran status or any other protected class. + Minimum of a Bachelor’s degree in business, supply chain, or related concentration or equivalent work experience + Experience utilizing data to make improvements and present reports + Experience in fast-paced manufacturing environment + PREFERRED QUALIFICATIONS: + Automotive manufacturing logistics experience a plus + Strong PC skills; database manipulation and query; proficiency in MS Office + General understanding of SAP + Material planning knowledge a plus + Logistics knowledge of transport modes + Ability to work independently and in a team environment inside of a manufacturing facility Job Location Marshall, Michigan, United States Position Type Full-Time/Regular
          Episode #366 Part I: Superman Family Comic Book Cover Dated February 1964: World's Finest Comics #139!   
World's Finest Comics 139, February 1964!



Download Episode 366 Part I!

WORLD'S FINEST COMICS 139, February 1964, was published on December 12, 1963. It contained 32 pages for the cover price of 12¢. Jack Schiff was the editor, and the cover was pencilled by Jim Mooney, according to Mike's Amazing World Of DC Comics. The Grand Comic Book Database credits Dick Dillon as the penciller. Sheldon Moldoff was the inker and Ira Schnapp was the letterer.

- The fifteen page Superman/Batman story was titled, THE GHOST OF BATMAN, written by Dave Wood, pencilled by Jim Mooney and inked by Sheldon Moldoff. This story was reprinted in SHOWCASE PRESENTS: WORLD'S FINEST vol. II.

- Aquaman and Aqualad starred in the ten page story, THE DOOM HUNTERS, written by Jack Miller and drawn by Ramona Fradon. This story was reprinted in TEEN TITANS 35, September/October 1971, published on July 15, 1971. It was also reprinted in SHOWCASE PRESENTS: AQUAMAN vol. II.

Also highlighted in this episode are the issue's ads and other features.

At the beginning of the episode, during the MY PULL LIST, I review the June 2017 cover dated issues that were published in April 2017, which I received from Discount Comic Book Service.

Next Episode: SUPERMAN FAMILY COMIC BOOK COVER DATED FEBRUARY 1964 PART II: SUPERMAN'S GIRL FRIEND LOIS LANE 47!

Then we will cover: SUPERMAN FAMILY COMIC BOOKS COVER DATED MARCH 1964: PART I: SUPERMAN'S PAL JIMMY OLSEN 75 & PART II: WORLD'S FINEST COMICS 140! (This issue was Jack Schiff's final issue as editor before Mort Weisinger took over.)


The home for THE SUPERMAN FAN PODCAST is http://thesupermanfanpodcast.blogspot.com. Send e-mail to supermanfanpodcast@gmail.com.


You can join the SUPERMAN FAN PODCAST Group or Page on facebook, and follow the podcast on twitter @supermanpodcast. You can also keep track of the podcast on Tumblr, Medium, Flipboard, the Internet Archive and Stitcher.

SUPERMAN FAN PODCAST is a proud member of the following:


- The SUPERMAN WEBRING of websites, and


The theme of this podcast is PLANS IN MOTION, composed by Kevin MacLeod, and part of the royalty free music library at http://incompetech.com.

Superman and all related characters are trademark and copyright DC Comics. Any art shown on this podcast is for entertainment purposes only, and not for profit. I make no claims of ownership of these images, nor do I earn any money from this podcast.

Thanks for listening to the SUPERMAN FAN PODCAST and, as always, thanks to Jerry Siegel and Joe Shuster, creators of Superman!

And don't forget to take care of each other out there!

          SpotCrime Weekly Reads   
Rulings on police-involved shootings questioned, hate crimes not being reported, Sessions and the Mary Jane debate continues, the fight over body cameras and police transparency, and more...

POLICE CONDUCT

Perjury charge dropped against Texas trooper who stopped Sandra Bland (Houston Chronicle)

Charlotte's Citizens Review Board requests evidentiary hearing for Keith Scott case (TWCNews)

Jackson County jail guards took bribes to smuggle contraband, feds allege (KansasCity.com)

Three Chicago Cops Charged With Conspiracy to Cover Up Laquan McDonald Killing (NBC)

Officer used excessive force in several cases (Fox)

White St. Louis police officer shoots off-duty black officer (CBS)

More Than 100 Federal Agencies Fail to Report Hate Crimes to the FBI’s National Database (ProPublica)

CRIME RATE

Will Trump Use Science to Fight Crime? (TheCrimeReport.org)

STOPPING GUN CRIME: Revamped ballistics strategy targets Houston's serial shooters (Houston Chronicle) See Also: These 14 Facts Are Crucial to Understanding Gun Violence in America (TheTrace.org) And Also: Prof. Andrew Papachristos on Gun Violence and The Company You Keep (Vera.org)

When the Mailman Unwittingly Becomes a Drug Dealer (WSJ)

What Jeff Sessions Gets Wrong About Marijuana (Bloomberg)

Portland condemns apartment building because of frequent police visits (PressHearald.com)

‘NC is the only state where no doesn’t mean no’: Court case ruled women can’t back out of sex (NewsObserver.com)

POLICE TRANSPARENCY

How Body Cameras Affect Community Members’ Perceptions of Police (Urban.org)

The Right to Record and Police Accountability (CATO.org)

Bill should expand use of police body cameras in Pa., but limits release of video (PennLive.com) See Also: Opponents Say Police Body Cam Bill Hinders Access (Public News Service)


          Jeppesen Data Cycle 1711 Full World (06.2017) | 2.82 GB |   
Jeppesen Data Cycle 1711 Full World (06.2017) | 2.82 GB Description: Update of aeronautical databases Release date: 06/02/2017 Version: 1711 Developer: Jeppesen Developer's site: Jeppesen.com Language: English Tabletka (crack): not required System requirements: Windows 7, 8 Coverage Area: Full World DOWNLOAD: http://nitroflare.com/view/1F2A167724ECF05/1711_PC.part1.rar http://nitroflare.com/view/E293D673694F488/1711_PC.part2.rar http://nitroflare.com/view/57CCA28CC3555CA/1711_PC.part3.rar
          Ecologist Builds Online Database Application Using Caspio, Saving Endangered Plants   

The Otay Mountain lilac is a California native plant that is continually threatened by wildfire and other man-made disruptions. It is among the more than five thousand rare plant species in the United States at risk for extinction. That’s why the Center for Plant Conservation (CPC) at San Diego Zoo Global is working toward conserving and [...]

The post Ecologist Builds Online Database Application Using Caspio, Saving Endangered Plants appeared first on Caspio Blog.


          Bradley McDougald could be very underrated FA signing for Seahawks   

The Seattle Seahawks made a very intriguing signing this offseason, bringing in Bradley McDougald from the Tampa Bay Buccaneers on one-year contract… The Seattle Seahawks added to their talented secondary this offseason by bringing in Bradley McDougald from the Tampa Bay Buccaneers. It was a very underrated move by the Seahawks, because McDougald is a […]

Bradley McDougald could be very underrated FA signing for Seahawks - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          Dallas Cowboys linebacker Damien Wilson in line for bigger role   

The Dallas Cowboys have some intriguing depth at the linebacker position, and Damien Wilson could be in line for a big role in 2017… Many who aren’t overly close to the Dallas Cowboys beat like myself have spent much of the offseason rooting for a full recovery from Jaylon Smith, a former Notre Dame linebacker […]

Dallas Cowboys linebacker Damien Wilson in line for bigger role - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          NFL: Philadelphia moves on from Dorial Green-Beckham   

Philadelphia officially moves on from the wideout who never put in the dedication to be the next big NFL receiver. Dorial Green-Beckham has officially been released by the Philadelphia Eagles after one season. Green-Beckham has always had major issues surrounding him, dating back to college when he was released from Missouri. Tennessee originally took a […]

NFL: Philadelphia moves on from Dorial Green-Beckham - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          Arizona Cardinals putting big faith in Robert Nkemdiche   

The Arizona Cardinals are putting a lot of faith in second year defensive lineman Robert Nkemdiche, and there will be growing pains along the way… Robert Nkemdiche was probably feeling the weight of a giant chip on his shoulder when he lasted deep into the first round of the 2016 NFL Draft before he was […]

Arizona Cardinals putting big faith in Robert Nkemdiche - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          Pittsburgh Steelers breakout player candidate – CB Artie Burns   

The Pittsburgh Steelers invested a first round pick in cornerback Artie Burns in 2016, and he leads their top breakout player candidates in 2017… The Pittsburgh Steelers have a rich tradition of stellar defensive play. With the offensive side of the ball taking the majority of the headlines in recent years, could cornerback Artie Burns […]

Pittsburgh Steelers breakout player candidate – CB Artie Burns - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          Brett Favre wants to make a comeback to the NFL   

Hall of Fame quarterback Brett Favre may not be eligible to play in the NFL, but he could still make a big impact on the field One year after being inducted into the Football Hall of Fame, quarterback Brett Favre is ready for his next challenge. During an interview on ESPN Wisconsin, the Packers legend […]

Brett Favre wants to make a comeback to the NFL - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          Miami Dolphins have high hopes for healthy Julius Thomas   

The Miami Dolphins have high hopes for tight end Julius Thomas, who is reuniting with former offensive coordinator Adam Gase… The Miami Dolphins are excited about the possibilities within their offense in 2017, and for good reason. They didn’t lose much by trading away Branden Albert, but they gained what they feel can be a […]

Miami Dolphins have high hopes for healthy Julius Thomas - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          The 5 Greatest NFL Players From the State of Wyoming   

The state of Wyoming is known best for its incredible scenery and history of great ranching. Certainly not the greatest NFL players ever. In fact, only five other states have produced fewer pro players in total than Wyoming, and four of those states are considerably smaller. The other is Alaska. Not […]

The 5 Greatest NFL Players From the State of Wyoming - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          The 5 Greatest NFL Players From the State of Nevada   

The state of Nevada is known more for its desert and its association with Las Vegas, the greatest gambling city in existence. Not the greatest NFL players. Nonetheless, even the most unlikely territories can give rise to some pretty special athletes. It won’t be a surprise that this particular list is comprised mostly of more recent […]

The 5 Greatest NFL Players From the State of Nevada - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          NFL: Oakland Raiders agree to terms with Gabe Jackson   

Oakland is on their way to leading the NFL in cap money spent this offseason. One week after making quarterback Derek Carr the highest-paid player in NFL history, the Oakland Raiders are back at it with the checkbook. Offensive lineman Gabe Jackson has reportedly signed a new contract with the Raiders to remain one […]

NFL: Oakland Raiders agree to terms with Gabe Jackson - NFL Mocks - NFL Mocks - 2016 NFL Mock Draft, NFL Draft, NFL Mock Draft Databases, Mock Drafts


          upvote / downvote an article   

Hi,

I think I could put this together. Possibly using a custom field to keep the score, then some smd query type stuff to add or subtract one using a pair of linked icons. There are probably other considerations that mean it’s not that simple. How do these features normally stop people from cheating and hammering the database with votes? I guess the most effective would be an IP-based delay between votes (a rough sketch of that idea follows after this post).

Any ideas?

Kind regards,
Mike
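
A minimal sketch of the IP-based delay idea (hypothetical SQLite table, column names and one-hour threshold; not tied to Textpattern or smd_query):

```python
# Sketch: accept an up/down vote only if this IP hasn't voted on the article recently.
# Table name, column names and the threshold are illustrative assumptions.
import sqlite3, time

MIN_INTERVAL = 3600  # seconds an IP must wait between votes on the same article

db = sqlite3.connect("votes.db")
db.execute("""CREATE TABLE IF NOT EXISTS votes
              (article_id INTEGER, ip TEXT, vote INTEGER, voted_at REAL)""")

def cast_vote(article_id: int, ip: str, vote: int) -> bool:
    """Record +1/-1 for an article unless this IP voted on it too recently."""
    last = db.execute(
        "SELECT MAX(voted_at) FROM votes WHERE article_id = ? AND ip = ?",
        (article_id, ip),
    ).fetchone()[0]
    if last is not None and time.time() - last < MIN_INTERVAL:
        return False  # too soon; ignore the vote
    db.execute(
        "INSERT INTO votes (article_id, ip, vote, voted_at) VALUES (?, ?, ?, ?)",
        (article_id, ip, vote, time.time()),
    )
    db.commit()
    return True

def score(article_id: int) -> int:
    """Current score for an article: sum of its up and down votes."""
    return db.execute(
        "SELECT COALESCE(SUM(vote), 0) FROM votes WHERE article_id = ?",
        (article_id,),
    ).fetchone()[0]
```

The same check could equally be done against a custom field, with the two linked icons simply calling cast_vote with +1 or -1.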


          Should God Open a Window: TimeSaga C8 (First of 5)   
Should God Open a Window TimeSaga: The Society Story Eight Chapter One The Japanese were not certain they wanted the world the database supplied with the name Forlithin. We found no sign of a portal. Those of the planet had bombed the world as close to oblivion as possible. It was now centuries later, so a lot of the radiation had decayed to below safe levels, but the planet provided no infrastructure to help the emigrants establish a new society. While they knew that there would be ...
          Database Workbench 5.3.2 released   
Upscene Productions is proud to announce the availability of the next release of the popular multi-DBMS development tool: "Database Workbench 5.3.2". This release includes a custom report writer, increased PostgreSQL support and a renewed stored routine debugger including full support for Firebird 3 Stored Functions and Packages. The change log for version 5.3.2 and 5.3.0 is available. “There is

          VSIX Extension Gallery for Visual Studio   
In a [previous article](/2017/05/vs-itemtemplates-wizards-and-vsix.html) I discussed how to create Item and Project templates and bundle them into their own VSIX installers. However, now that you have your VSIX installer the question becomes how do you distribute it to all your coworkers? It would be great if they could use the _Extensions and Updates_ manager that is already built into Visual Studio. That handles installing, uninstalling, searching and auto-updating extensions. Pretty neat! However the project you have is not suitable to have in a public VSIX repository like the [Visual Studio Marketplace](https://marketplace.visualstudio.com/vs).
The public marketplace is no place for internal company extensions unfortunately.

You need to host your own private marketplace. **But how?** TL;DR; I built one

# Private Extension Galleries

Microsoft has [addressed this issue](https://blogs.msdn.microsoft.com/visualstudio/2011/10/03/private-extension-galleries-for-the-enterprise/) but only through documentation. There are no concrete implementations available from Microsoft nor any others on how to host and serve these files. Neither is there a simple way to leverage the advanced features the Extension Manager provides (such as search, ratings and download counts).

## So what is available?

Unfortunately the commercial offerings are incredibly limited. The main one, [myget.org](https://www.myget.org), is purely online and regretfully not free. The popular [Nexus Repository](https://www.sonatype.com/nexus-repository-oss) by Sonatype dropped support for VSIX files in their latest version (v3). There are some [half-automated](https://github.com/garrettpauls/VSGallery.AtomGenerator) solutions out there, others [very manual](https://www.codeproject.com/Articles/881890/Your-Private-Extension-Gallery). The worst thing about most of the automatic offerings is that they require being run on an [existing webserver](http://blog.ehn.nu/2012/11/using-private-extension-galleries-in-visual-studio-2012/) (IIS, Apache, Nginx, etc.) and require a relational database system to store data. So there really is no freely available out-of-the-box solution available. **Until now...**

# Introducing vsgallery
The VS-Gallery running inside of Visual Studio's Extension Manager

With the current rise in popularity of [_the Microservice_](https://en.wikipedia.org/wiki/Microservices) I found it really disappointing that a simple click-to-run solution wasn't available to run a private Visual Studio Marketplace. I wanted something simple and self-contained that could be run without installing and configuring multiple other systems (such as a webserver and a database system).

## The VS Gallery solution

Before I bore you with more text, go ahead and test [_vsgallery_](https://github.com/sverrirs/vsgallery) out. Just download the latest release and run the executable file. It is really that super simple. I promise!

Download vsgallery

**vsgallery** is a single executable file which acts as a complete self-hosted extension gallery for Visual Studio 2010 and newer. It really is ultra simple to configure and run. You are up and running in a few seconds. All for the low low price of **FREE**! The whole system runs as a single self-contained executable and uses no database. All files and data are stored on the local file system which makes maintenance and backup super simple.
## Features

* Fully featured Extension Gallery ready to use in Microsoft Visual Studio.
* Counts downloads of extensions
* Displays star ratings, release notes and links to project home pages
* Offers a simple to use REST API to submit ratings and upload new VSIX packages
* Atom and JSON feeds for available packages
* It's FREE!

# How to install into Visual Studio

In Visual Studio:

```
Tools > Options > Environment > Extensions and Updates
```

Add a new entry and copy in the URL of the main Microservice Atom Feed.

> By default the URL is `http://YOUR_SERVER:5100/feeds/atom.xml`
Please consult [this MSDN document](https://msdn.microsoft.com/en-us/library/hh266746.aspx) for any further details and alternative options on how to install a Private Extension Gallery in Visual Studio.

# How it works

The microservice is configured via the `config.ini` file that sits in the same folder as the main executable. The `.vsix` files, along with their download counts and ratings data, are stored in a subfolder of the main service executable `VsixStorage/` (this subfolder is configurable). This makes taking backups and moving the service between machines super easy as the root folder contains the entire Microservice state and data.

```
root-folder
|--vsgallery.exe
|--config.ini
|--VsixStorage
   |--atom.xml
   |--First.vsix
   |--Second.vsix
   |--AndSoForth.vsix
```

Latest release

# The vsgallery API

The Microservice comes with a rich HTTP based API. You can plug the data and its functionality directly into your development portal or company intranet with minimal web programming. Even direct integration into your continuous integration platforms and communication pipelines such as #slack are possible.

> The `vsix_id` required by many of the endpoints can be obtained by reading the `id` field in the feed endpoints.

### [GET] /feeds/atom.xml

This is the main entry point for the VSIX feed and serves up the Syndicate-Feed compatible Atom file containing all available extensions on the server. **This is the URL endpoint that should be used in Visual Studio.** See [How to install into Visual Studio](#how-to-install) for more information.

### [GET] /api/ratings/{vsix_id}

Retrieves the rating value and vote count for a particular VSIX package by its ID.

```
curl -X GET http://VSGALLERY_SERVER:5100/api/ratings/VSIX_ID
```

The return type is the following JSON:

```
{ "rating": 4.3, "count": 19 }
```

### [POST/PUT] /api/ratings/{vsix_id}

Submitting rating values for a particular VSIX package by its ID. The post payload should be just a raw string and contain a single floating point value in the range [0.0, 5.0]. The example below will post a rating of `3.5` stars to the VSIX package with the id `VSIX_ID`:

```
curl -X POST -H "Content-Type: text/plain" --data "3.5" http://VSGALLERY_SERVER:5100/api/ratings/VSIX_ID
```

### [GET] /api/json

JSON feed for the entire package catalog. Same data that is being fed through the atom feed but just in a handier JSON format.

### [POST/PUT] /api/upload

This endpoint accepts form-data uploads of one or more .vsix files to the hosting service. The example below will upload the file `my.vsix` to the gallery server and propose a new name for it, `renamed.vsix` (you can omit the filename param to use the original name):

```
curl -X POST --form "file=@my.vsix;filename=renamed.vsix" http://VSGALLERY_SERVER:5100/api/upload
```

To upload multiple files simply add more form elements. The example below uploads two VSIX files at the same time:

```
curl -X POST --form "file1=@my.vsix" --form "file1=@your.vsix" http://VSGALLERY_SERVER:5100/api/upload
```

# Closing

So if you're searching for a simple solution for your internal, low traffic, extension gallery then please consider **vsgallery**. If you do try it out, please leave me feedback in the comments below. Peace!

vsgallery
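
For completeness, the same calls can be scripted outside curl. Below is a minimal Python sketch against the endpoints documented above, assuming the `requests` library is available and using placeholder values for the server name, VSIX id and file name:

```python
# Minimal client sketch for the vsgallery HTTP API described above.
# BASE, VSIX_ID and my.vsix are placeholders; port 5100 is the documented default.
import requests

BASE = "http://YOUR_SERVER:5100"
VSIX_ID = "Your.Vsix.Id"  # taken from the `id` field of the atom/json feeds

# Read the current rating and vote count
print(requests.get(f"{BASE}/api/ratings/{VSIX_ID}").json())  # e.g. {"rating": 4.3, "count": 19}

# Submit a 3.5-star rating (plain-text payload, as in the curl example)
requests.post(
    f"{BASE}/api/ratings/{VSIX_ID}",
    data="3.5",
    headers={"Content-Type": "text/plain"},
)

# Upload a package via multipart form-data, optionally renaming it on the server
with open("my.vsix", "rb") as fh:
    requests.post(f"{BASE}/api/upload", files={"file": ("renamed.vsix", fh)})
```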
          Fact Checking Network Gets Backing From Soros, Radical Iranian Tycoon   

Good news. Your "facts" are about to be "checked" by George Soros.

If the whole fact-checking paradigm that the media has blackmailed Facebook and Google into rolling into their results hadn't already been sufficiently poisoned by naked partisanship, left-wing politics, and the presence of the amateur embattled left-wing activists at Snopes, here comes the death knell for its credibility.

"Fact-checking has never been this important. Come define its future" is the Poynter headline. But its future has already been defined, the cheerful posting informs us, by its funders.

Thanks to $1.3 million in grant funding from the Omidyar Network and the Open Society Foundations, the IFCN can now expand its work. New initiatives will include an innovation fund to reward new formats and business models for fact-checking, an impact tracker to help evaluate and monitor the efficacy of this type of work, and a tool to turn the links fact-checkers use into a searchable database of trustworthy primary sources.

Everyone knows who radical leftist billionaire George Soros is.

Soros is the left's biggest radical sugar daddy. But somewhere up there, particularly for pro-terror sites, is Iranian tycoon Pierre Omidyar. 

Pierre Omidyar has financed a war on national security and Israel through anti-American sites such as The Intercept. 

When you understand that this is where the "international fact checking network" is getting its financing, you understand the kind of "fact checking" it will be doing. And whom you can expect to be doing it.

Facebook and Google's embedding of partisan left-wing "fact checking" sites is one of the greatest assaults on freedom of expression on the internet. And now the partisan sites are about to fall further into the fever swamps of left-wing extremism.

Congressional Republicans should call out Google and Facebook for their double standard in advocating Net Neutrality while pushing Opinion Bias.


          Re: Lakeville Grove Cemetery   
Hi Kristina, couple thoughts,

The small place Boyceville, Dunn, Wisconsin, is not too far north of the large city Minneapolis, Hennepin, Minnesota. Perhaps Gust went to Minneapolis because he needed to go to a hospital with more resources than his local hospital.

Since Gust used the surname Olson, it seems his sisters would too - although I agree that's not necessarily the case, and "Hilda Nelson" could possibly be his sister. Like Gust, many (maybe most) Scandinavian immigrants who came to the USA with their father adopted the father's patronymic as their own surname.

About this comment: "Nels Olson (Birth 1840, Death: Unknown), Karna Olson (Birth: 1841, Death: 1920)." You no doubt have updated information on this by now, also. For the record, according to Find a Grave, Karna was Karna Anderson. The mother's maiden name is found on a death record, so it is important to note she was not born Olson.

According to Find a Grave, Nels Olson died 19 December 1928 in Minneapolis, Hennepin, Minnesota. The informant on his death record was Mrs. J P Johnson. The Find a Grave creator concludes this "must be his daughter Anna."
http://www.findagrave.com/cgi-bin/fg.cgi?page=gr&GRid=82...

Would be useful to know who was the informant on the death record for Karna Anderson Olson when she died 19 November 1920.

Minnesota death records can be ordered through the Minnesota Historical Society website (as you obviously know, as you have the record for Gust). That website has a birth index, death index, and Minnesota state census index.

1899 immigration record (passenger manifest, arriving Ellis Island) that you found:
August Nilsson 28 born about 1871
Hilda Nilsson 18 born about 1881
Selma Nilsson 15 born about 1885

So these are the birth years to watch for if anyone helps you search for Hilda and Selma. The full birth date (day/month/year) and their mother's maiden name will help to make a solid id if an obituary or death record is found for a "Hilda" or a "Selma."

The Dalby Database may have additional nuggets if searched more thoroughly than I tried. Here on this link you see the burial of Karna Olson 1841-1920:
http://www.dalbydata.com/user.php?action=cemsearchresults

Here are the Hildas born 1881 that show up in Dalby Database (burials):
http://www.dalbydata.com/user.php?action=cemsearchresults

Here are the Selmas born 1885 that show up in the Dalby Database (burials):
http://www.dalbydata.com/user.php?action=cemsearchresults

Just examples - of possible clues.

Below, I think this is your candidate for Hilda:

1905 census in Hennepin County, Minnesota
Hilda Young (with family) Hilda is born about 1881 in Sweden

1910 US Census in Hennepin County, Minnesota
William Young 30
Hilda Young 29 born about 1881
Almeda Young 5
Ina Young 3

So, no answers, just a few ponderings....

A volunteer, not related
          Active Directory Administrator - (Boston)   
Job Description Job Description The successful candidate for this position will: Provide and maintain support for a robust and resilient infrastructure for DCMA's authorization and authentication requirements Maintain Support for the enterprise Active Directory environment and resolve any errors therein. Provide expertise on Active Directory integration and capacity planning May prepare and present management with reports on system availability, and communicate issues and recommended solutions in common terms to non-technical enterprise Active Directory stakeholders Function as a Senior Level Technical resource regarding Active Directory issues to messaging administrators, programmers, web developers, network security engineers, database analysts, field services technicians, network managers, and implementation teams Conduct Windows server administration Provide advanced trouble shooting of WSUS, DNS, DHCP, and IIS Diagnose and resolve production incidents in an analytical and methodical manner Build and maintain partnerships with agency and Active Directory support clients Develop, implement and update disaster recovery plans for supported systems Basic Qualifications Senior level experience managing large scale server environments Senior level experience troubleshooting server issues and diagnosing root cause of issue Knowledge of virtualization and server consolidation using VMware Virtual Infrastructure and associated tools. Must have in-depth experience in designing, managing, and supporting at a senior level: o Microsoft Active Directory infrastructure, including Hands-on experience administering Microsoft Active Directory o 2008/2012 in a multi-site and multi-domain organization o Microsoft WSUS infrastructure o ADFS infrastructure o DNS infrastructure o AD Replication Must be able to assess and review Enterprise server infrastructure, and take proactive measures to ensure continued stability, and assist in the development and/or revision of server based standards, guidelines and policies as determined by internal stake holders Must be able to Troubleshoot at a senior level issues with servers, server operating system and software, including experience troubleshooting issues in a high availability production environment, load balancers, disaster recovery and encryption Strong working knowledge of standards and protocols: TCP/IP, DNS, DHCP, WINS, SMTP, RPC, HTTPS; including knowledge of forest to forest trusts Scripting expertise on Windows Server 2008 – 2012 as well as knowledge of IIS and networking concepts, VPN’ s, etc. Must be willing to work on call and after hours to support Operations worldwide.
          Junior Software QA Engineer - (Boston)   
Job Description We are looking for a Junior Software QA Engineer for delivering high-quality financial software products for the LifeYield. This position would include activities such as regression test design, creating and executing automated test scripts, documenting defects for the development team, and monitoring testing activity. Responsibilities: * API, Functional, Systems and GUI testing * Define test execution strategy, regression test suites * Learn and apply project high-level and detailed business requirements * Draft test artifacts, including Test Plans, Business Test Scenarios and Requirements Traceability Matrices * Perform and document software tests and report problems into issue management system * Candidate will be required to learn new product features and build tests for those features, automate existing tests, and write system-level tests of database functionality in the areas of performance * Estimate, prioritize, plan, and coordinate testing activities while working on multiple projects.
          Systems Analyst:Guidewire Developer - (Webster)   
Systems Analyst Guidewire Developer. Location: Webster, MA. Job Summary: Are you looking for a position that offers advancement opportunities, great benefits and recognition for a job well done? Join MAPFRE Insurance. MAPFRE Insurance is a forward-thinking insurance company offering friendly service from over professionals focused on taking care of you and your family. For decades MAPFRE Insurance has been protecting families and their possessions with quality insurance coverage and a strong commitment to service excellence. Your Future Starts Here. Systems Analyst Guidewire Developer: Design, develop and maintain complex application systems, working both individually and within a team environment. The Systems Analyst will ensure that department standards and quality control processes are upheld and company needs are satisfied. Lead the decision-making process with IT management and team members and make recommendations to effectively implement project requirements.
Job Requirements. Education: Bachelor's Degree or professional level of knowledge in a specialized field, or equivalent related experience. Experience: years, or Associates Degree equivalent plus years. Knowledge: Complete understanding and knowledge of industry practices, standards and concepts within the field of work; applies them to perform or lead work requiring extensive analytical business skills. Decision Making: Makes decisions related to a wide variety of situations within management limits; interprets guidelines and procedures, applying judgment and discretion; decisions influence portions of a project, client relationships and/or expenditures. Supervision Received: Works independently under general supervision; work is reviewed for overall adequacy in meeting objectives. Leadership: Provides guidance and training to less experienced staff as needed; takes a lead role in a group, team or project. Problem Solving / Operations / Direct Work Involvement: Applies knowledge to determine solutions to complex problems with minimal direction; uses research and analysis to develop innovative and practical solutions which are consistent with organizational objectives. Client Contacts: Contacts other departments and/or external organizations or parties frequently; contacts are primarily at or below upper management levels; represents the organization on specific projects; communication may involve persuasion and negotiation.
Additional Knowledge, Skills and Abilities: Specific technical skill sets: Guidewire, specifically in the domain of Policy, Claims and Billing; Java Developer with years experience and GOSU (G Script). Demonstrated mastery of the duties and responsibilities of the Programmer Analyst position as outlined in the job description. Exceptional analytical, decision-making and problem-solving abilities. Proven ability to create complex system and database designs which are flexible, efficient and maintainable. Proven ability to recommend complex software solutions. Experience with testing tools. Experience with the company development platforms preferred. Procedural development workflow analysis experience essential. Knowledge of systems and the business functions of customer areas preferred. Understanding of the business and technical elements of the insurance industry preferred. Proven ability to provide accurate estimates for project deliverables. Ability to coach and direct staff in a demanding technical environment. Exceptional interpersonal, communication and presentation skills, both written and verbal. Exceptional listening skills and negotiating and influencing skills. Exceptional organizational and time management skills. Ability to manage group dynamics, facilitate effective team interaction and negotiate effectively. Ability to multi-task, manage details and execute effective follow-through. Ability to develop quality contingency plans. Thorough working knowledge of a System Development Methodology. Proficiency in the use of desktop applications. Demonstrated dependability in a highly dynamic environment.
MAPFRE is committed to recognizing our employees as our most valuable resource. We know our employees are the foundation for our accomplishments. That's why we offer so many opportunities to share in the success they help us achieve. We are MAPFRE. We are people who take care of people. If you require an accommodation for a disability so that you may participate in the selection process, you are encouraged to contact the MAPFRE Insurance Talent Acquisition team at talentacquisition@mapfreusa.com. We are proud to be an equal opportunity employer. INDEED Source: http://www.juju.com/jad/000000009qxk8m?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Software Engineer - (Cambridge)   
About the role and the team: This is an exciting opportunity to join a fast-growing team in the heart of Kendall Square. We provide multiplex assays to medical researchers around the world. Software to design experiments, analyze instrument data and present the results is an essential component of the product. We are in the process of expanding the set of software tools that will be available to our customers. This opportunity is for a software engineer to develop and support web based applications. The candidate will work as part of a small team of software developers to deliver and support applications for in-house scientists and business development team members, as well as users at customer sites. Our end users are typically wet lab biologists; a key design goal is simplifying the user experience, delivering sophisticated and high quality data analysis without the need for scripting or programming on their part. Applications include textual analysis of biological literature, online database mining for assay interpretation, and statistically based experiment design tools. The tool at http://www.fireflybio.com (portal search) provides a flavor of our work.
About you: You should have a demonstrated ability to work and think independently and creatively. Our development projects are not delivered with detailed specifications but as user requests, to be solved by whatever means are most appropriate, so you will be expected to understand and explore the design space of alternative solutions. You should be able to recognize requests in the larger context of platform development and deliver special requests as part of a new general capability. You will have a passion for writing elegant, compact, maintainable code. If this sounds like you, please read on and apply.
Minimum Qualifications: Bachelor's Degree in Computer Science or an engineering/scientific discipline; years of extensive programming experience; years' experience with designing, building, extending and maintaining mid-scale software systems.
Preferred Qualifications: Familiarity with object-oriented development in Java (data structures, collections, design patterns, garbage collection, multithreading). Web application experience with at least one modern MVW toolkit (e.g. Django, Rails, NodeJS, Spring) in a modern programming language (e.g. Java, Python, Ruby, Scala, JS). A track record of finishing projects, meeting deadlines and getting things done. Proven ability to work independently and multitask. Friendly, positive, self-motivated and a team player. Effective written and verbal communication skills. Background in health care, bioinformatics, genetics, genomics or molecular biology. Experience with regression testing, developer documentation, end user documentation and version control is expected. Fundamental understanding of applied math and/or statistics. Proficiency in Scala or prior exposure to functional programming via a language such as Lisp or Scheme.
About Us: Ever since our founder Jonathan Milner started selling antibodies from the back of his bike, Abcam has aimed to help scientific researchers make breakthroughs faster. We now have offices and labs in the UK, the US, China and Japan, and as we continue to grow we remain ambitious, driven by our customers' success and their research needs. It's our goal to provide a world standard in protein research tools, technical support and delivery. When you join Abcam, you'll join a global business with the passion and the vision to become the most influential company and best loved brand in life sciences. Our culture is our key differentiator. We believe in empowering individuals, with responsibility given at an early stage. The working environment is fun and fast paced, collaborative and outcome focused, with a strong customer focus. In addition to competitive salaries, we can offer an attractive flexible benefits package which includes share options, a culture focused on wellbeing and opportunities for growth and development. Abcam is an Equal Opportunity Employer and makes all employment decisions without regard to age, national origin, race, ethnicity, religion, creed, gender, sexual orientation, disability, veteran status or any other characteristic protected by law. Source: http://www.juju.com/jad/000000009jikil?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc69354e46c76daa485f5433b1779459d32f1da485eef8e872
          Application Development Team - (Holyoke)   
Application Development Team: GRT Corporation is looking for three Java/Oracle specialists to deliver a web based application for its client located in Holyoke, MA.
Highly experienced Software Developer and Architect - to assist in the design of a next generation web application offering.
Two Web Application Developers - to develop and support all external/internal web related software and applications.
W2 tax term is required for all positions.
Responsibilities:
Architect - architect, design, and code using Spring and other open source technologies while investigating existing and new technologies to build new features and integration points.
Developers - code and support external/internal web related software and applications while assisting with the development of test conditions and scenarios. Collaborate with other team members to implement application features, including user interface & business functionality.
Qualifications:
B.S. degree in Computer Science, or equivalent
Minimum of 5+ years software development experience
Experience with UNIX operating system, services, and commands
Experience with J2EE, Hibernate, Spring and Struts
Experience with modern front-end Javascript libraries (jQuery)
Experience with REST/JSON APIs
Experience on application servers such as Apache Tomcat, JBOSS EAP 6.x
Hands on experience with JAX RS, JAXB, JMS, Spring 4
Strong experience in Junit, Mockito, Spring-Test and automated testing in general is a MUST
Experience creating/consuming Web services
Experience with Testing frameworks
Strong experience working with databases - PL/SQL
Demonstrates integrity and authenticity
Additional Qualifications:
Architect: Experience in agile methodology; 5-10+ years' experience writing robust web applications with Spring Framework (Spring Boot, Spring Security, Spring MVC, etc.) using Java; familiarity with GIT, CVS source code management tools.
Developer 1: Java frameworks, especially microservice architecture; Java framework and messaging architecture.
Developer 2: Experience with SalesForce API; strong experience in Enterprise Application Integration (EAI) patterns.
If you are interested, please apply to the positions providing the following: the position you are applying for, your current/desired compensation, a daytime phone number, and your authorization status.
thomas.simpson@grtcorp.com
Regards, Thomas Simpson, HR Specialist, GRT Corporation, Stamford, CT 06901
Web: , J2EE, JSON, Spring 4, Hibernate, Struts, Jaxb, Jaxr, PL/SQL Source: http://www.juju.com/jad/000000009qi9id?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Cyber Security Engineer - (Newton)   
Job Description: Want to work for a dynamic company that feels nice and compact but boasts the perks of companies several times its size? With its rapid growth and global nature, Octo may be the place for you! Octo Telematics, NA is seeking a Cyber Security Engineer to design, test, implement and monitor security measures for Octo's systems.
Responsibilities:
  • Analyze and establish security requirements for Octo's systems/networks
  • Defend systems against unauthorized access, modification and/or destruction
  • Configure and support security tools such as firewalls, anti-virus software, patch management systems, etc.
  • Define access privileges, control structures and resources
  • Perform vulnerability testing, risk analyses and security assessments
  • Identify abnormalities and report violations
  • Oversee and monitor routine security administration
  • Develop and update business continuity and disaster recovery protocols
  • Train fellow employees in security awareness, protocols and procedures
  • Design and conduct security audits to ensure operational security
  • Respond immediately to security incidents and provide post-incident analysis
  • Research and recommend security upgrades
  • Provide technical advice to colleagues
Qualifications:
  • Bachelor's in Computer Science, Cyber Security or a related technical field
  • 5 years plus experience in Cyber Security
Security Expertise:
  • Expertise in security technology with one or more product certifications (BlueCoat, Cisco, SonicWall, Damballa, IBM, Kaspersky, MSAB, Microsoft AD, TippingPoint, F5, VMware)
  • TCP/IP, computer networking, routing and switching
  • DLP, anti-virus and anti-malware
  • Firewall and intrusion detection/prevention protocols
  • Secure coding practices, ethical hacking and threat modeling
  • Windows, UNIX and Linux operating systems
  • ISO 27001/27002, ITIL and COBIT frameworks
  • PCI, HIPAA, NIST, GLBA and SOX compliance assessments
  • C, C++, C#, Java or PHP programming languages
  • Security Information and Event Management (SIEM)
Desirable Security Certifications:
  • Security+: CompTIA's popular base-level security certification
  • CCNA: Cisco Certified Network Associate - Routing and Switching
  • CEH: Certified Ethical Hacker
  • GSEC / GCIH / GCIA: GIAC Security Certifications
  • CISSP: Certified Information Systems Security Professional
Company Description: OCTO NA is a global leader in software and data analytics for the insurance and auto markets, with over four million connected users worldwide and a vast database of 380 billion km of driving data.
          Senior Site Reliability Engineer - (Watertown)   
ID 2017-1880. Job Location(s): US-MA-Watertown. Position Type: Permanent - Full Time.
More information about this job:
Overview: This role is based within our Global Technical Operations team. Mimecast Engineers are technical experts who love being in the centre of all the action and play a critical role in making sure our technology stack is fit for purpose, performing optimally with zero down time. In this high priority role you will tackle a range of complex software and system issues, including monitoring of large farms of servers in multiple geographic locations, responding to and safeguarding the availability and reliability of our most popular services.
Responsibilities: Contribution and active involvement with every aspect of the production environment, to include:
  • Dealing with design issues.
  • Running large server farms in multiple geographic locations around the world.
  • Performance analysis.
  • Capacity planning.
  • Assessing application behavior.
  • Linux engineering and systems administration.
  • Architecting and writing moderately-sized tools.
You will focus on solving difficult problems with scalable, elegant and maintainable solutions.
Qualifications: Essential skills and experience:
  • In depth expertise in Linux internals and system administration, including configuration and troubleshooting.
  • Hands on experience with performance tuning of Linux OS (CentOS), identifying bottlenecks such as disk I/O, memory, CPU and network issues.
  • Extensive experience with at least one scripting language apart from BASH (Ruby, Perl, Python).
  • Strong understanding of TCP/IP networking, including familiarity with concepts such as the OSI stack.
  • Ability to analyze network behaviour, performance and application issues using standard tools.
  • Hands on experience automating the provisioning of servers at a large scale (using tools such as Kickstart, Foreman etc).
  • Hands on experience in configuration management of server farms (using tools such as mcollective, Puppet, Chef, Ansible etc).
  • Hands on experience with open source monitoring and graphing solutions such as Nagios, Zabbix, Sensu, Graphite etc.
  • Strong understanding of common Internet protocols and applications such as SMTP, DNS, HTTP, SSH, SNMP etc.
  • Experience running farms of servers (at least 200+ physical servers) and associated networking infrastructure in a production environment.
  • Hands on experience working with server hardware such as HP Proliant, Dell PowerEdge or equivalent.
  • Comfortable working on call rotas and out of hours as and when required to ensure uptime of the service.
Desirable skills:
  • Working with PostgreSQL databases.
  • Administering Java based applications.
  • Knowledge of MVC frameworks such as Ruby on Rails.
  • Experience with container technology.
Rewards: We offer a highly competitive rewards and benefits package including pension, private healthcare, life cover and a gym subsidization.
          Java/Microservices - (Ipswich)   
Hello, Principal Java/Microservices Software Engineers
Duration: 6+ months contract to hire
Location: Ipswich, MA
Requirements:
o Minimum 10 years of experience in specification, design, development and maintenance of enterprise-scale, mission critical distributed systems with demanding non-functional requirements
o Bachelor's Degree in Computer Science, Computer Information Systems or related field of study; Master's Degree preferred
o 8+ years of experience with SOA concepts, including data services and canonical models
o 8+ years of experience working with relational databases
o 8+ years of experience building complex server side solutions in Java and/or C#
o 8+ years of experience in the software development lifecycle
o 3+ years of experience building complex solutions utilizing integration frameworks and ESB
o Demonstrated strong knowledge and experience applying enterprise patterns to solving business problems
Preferred Qualifications:
o Leadership experience
o Strong abilities troubleshooting and tuning distributed environments processing high volumes of transactions
o Familiarity with model driven architecture
o Familiarity with BPM technologies
o Experience with any of the following technologies: Oracle, MySQL, SQL Server, Linux, Windows, NFS, Netapp, Rest/SOAP, ETL, XML technologies
o In depth technical understanding of systems, databases, networking, and computing environments
o Familiarity with NLP and search technologies, AWS cloud based technologies, Content Management systems, the publishing domain, and EA frameworks such as TOGAF and Zachman
o 2+ years of experience building complex Big Data solutions
o Excellent verbal, written and presentation skills with the ability to communicate complex technical concepts to technical and non-technical professionals
Regards, Pallavi, 781-791-3115 (468)
Java, Microservices, cloud, AWS, architect Source: http://www.juju.com/jad/000000009qiqw5?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc0413802e189be385daf54a16310431f6ff8f92f7af39df48
          Progress DBA (L-3) - (Boston)   
Hi, hope things are going well at your end. I have an urgent permanent position of Progress DBA (L-3) open in MA. If this position suits you well, please share the below details (visa status, current location) along with your updated resume ASAP. The JD is given below.
Title: Progress DBA (L-3)
Location: Boston, MA
Full-Time only
Job Description:
To resolve tickets/escalations/incidents, through root cause analysis, in adherence to SLA, quality, process & security standards, to ensure positive customer feedback and value creation.
To ensure positive customer experience and CSAT through First Call Resolution and minimum rejected resolutions / reopened cases.
Work on value adding activities such as knowledge base update & management, training freshers, coaching analysts & conducting interviews/participating in hiring drives.
To participate or contribute on EN business in creation of proposals to drive service improvement plans.
To adhere to quality standards, regulatory requirements and company policies.
To provide support for on call escalations and do incident & problem management.
To independently resolve tickets & ensure that the agreed SLA of ticket volume and time are met for the team.
Create and maintain Progress databases required for development, testing, integration, QA and production usage.
Perform the capacity planning required to create and maintain the databases. The DBA should work closely with system and storage administrators to meet hardware, storage and operating system requirements.
Perform ongoing analysis and tuning of the Progress database.
Install new versions of the Progress database and its features and any other tools that access the Progress database.
Plan and implement backup and recovery of the Progress database.
Application code promotion and migration of programs, database changes, reference data changes and menu changes through the development life cycle.
Experience and knowledge in migrating code, database changes, data and menus through the various stages of the development life cycle.
Implement and enforce security requirements for Progress databases.
Perform database re-organizations as required to assist performance and ensure maximum uptime of the database.
Evaluate releases of Progress database products to ensure that the environment is running the products that are most appropriate.
Planning is also performed by the DBA, along with the application developers and system administrators, to ensure that any new product usage or release upgrade takes place with minimal impact.
Provide technical support to application development teams.
          Senior Principal Cloud Software Development Engineer- IaaS/ Bare-metal - (Burlington)   
Design, develop, troubleshoot and debug software programs for databases, applications, tools, networks etc. As a member of the software engineering division, you will assist in defining and developing software for tasks associated with the developing, debugging or designing of software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs. Duties and tasks are varied and complex, needing independent judgment.
          Software Development Engineer in Test - Folio - (Ipswich)   
Skills
Requirements:
5+ yrs Java & Object Oriented Design/Programming
Implementation of 1 or more production RESTful interfaces in a microservices model
2+ yrs product implementation experience with databases, both SQL and NoSQL (PostgreSQL specifically is a plus)
2+ yrs product implementation experience in a cloud computing environment (AWS specifically is a plus)
3+ yrs experience using Agile and/or SAFe
Preferred Qualifications:
CI/CD using (eg) Jenkins, Maven, Gradle
SCM - Git/GitHub
Test Driven Development (TDD) and Automated Unit Testing
Developing automated integration and acceptance tests
Automating UI testing (eg Selenium, Sauce Labs)
Developing performance and load tests at high scale (eg JMeter)
General HTTP knowledge including familiarity with cURL or similar tools
Linux - general knowledge, shell scripting (RedHat/Amazon Linux specifically is a plus)
Virtualization - Docker, Vagrant, etc.
Open Source Software - general knowledge of the SW dev model, experience contributing
RAML, JSON, XML
JavaScript and related tools/frameworks, both client-side and server-side - React, Node.js, webpack, npm/yarn, etc.
Security related experience - SSO, OAuth, SAML, LDAP, etc.
Logging/Monitoring/Alerting/Analytics - SumoLogic, Datadog, collectd, SNMP, JMX, etc.
Why the North Shore of Boston and EBSCO are great places to live and work! Here at EBSCO we will provide relocation assistance to the best and brightest people. We are 45 minutes outside of Boston, just minutes from the beach in Ipswich, MA. Ipswich is a part of the North Shore and contains a wide variety of locally owned shops, restaurants, and farms.
          Machine Learning Style Transfer For Museums, Libraries, and Collections   

I'm putting some thought into the next steps for my algorithmic rotoscope work, which is about training and applying image style transfer machine learning models. I'm talking with Jason Toy (@jtoy) over at Somatic about the variety of use cases, and I want to spend some time thinking about image style transfers from the perspective of a collector or curator of images--brainstorming how they can organize and make available their work(s) for use in image style transfers.

Ok, let's start with the basics--what am I talking about when I say image style transfer? I recommend starting with a basic definition of machine learning in this context, provided by my girlfriend and partner in crime Audrey Watters. Beyond that, I am just referring to training a machine learning model by directing it to scan an image. This model can then be applied to other images, essentially transferring the style of one image to any other image. There are a handful of mobile applications out there right now that let you apply a handful of filters to images taken with your mobile phone--Somatic is looking to be the wholesale provider of these features.
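
To make that a little more concrete, here is a minimal Python sketch of what "applying a style" can look like in code, using a pre-trained arbitrary image stylization model published on TensorFlow Hub. This is only an illustration under assumptions: the file names are placeholders, the image sizes are arbitrary, and this is not the Somatic or Algorithmia pipeline described in this post.

# Rough sketch: apply the style of one image to another with a pre-trained model.
# Assumes TensorFlow and tensorflow_hub are installed and the magenta
# arbitrary-image-stylization model is still published at the handle below.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    # Decode to float32 in [0, 1], resize, and add a batch dimension.
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

content = load_image("ellis_island.jpg")            # photo to be re-styled (placeholder file)
style = load_image("propaganda_poster.jpg", 256)    # image the style is taken from (placeholder)

# A published arbitrary-stylization model; training your own model, as described in
# this post, replaces this step with a GPU training run on the style image.
model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = model(content, style)[0]

# Write the stylized image back out as a PNG.
tf.io.write_file("stylized.png",
                 tf.image.encode_png(tf.cast(stylized[0] * 255, tf.uint8)))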

Training one of these models isn't cheap. It costs me about $20 per model in GPUs to create--this doesn't consider my time, just my hard compute costs (AWS bill). Not every model does anything interesting. Not all images, photos, and pieces of art translate into cool features when applied to images. I've spent about $700 training 35 filters. Some of them are cool, and some of them are meh. I've had the most luck focusing on dystopian landscapes, which I can use in my storytelling around topics like immigration, technology, and the election

This work ended up with Jason and me talking about museums and library collections, and about opportunities for them to think about their collections in terms of machine learning, specifically algorithmic style transfer. Do you have images in your collection that would translate well for use in graphic design, print, and digital photo applications? I spend hours looking through art books for the right textures, colors and outlines. I also spend hours looking through graphic design archives for the movie and gaming industries, as well as government collections, looking for just the right set of images that will either produce an interesting look, or possibly transfer something meaningful to the new images that I am applying styles to.

Sometimes style transfers just make a photo look cool, bringing some general colors, textures, and other features to a new photo--there really isn't any value in knowing what image was behind the style transfer, it just looks cool. Other times, the result can be enhanced by knowing about the image behind the machine learning model, not just transferring styles between images, but also potentially transferring some meaning as well. You can see this in action when I took a Nazi propaganda poster and applied it to a photo of Ellis Island, or took an old Russian propaganda poster and applied it to images of the White House. In a sense, I was able to take some of the 1000 words behind the propaganda posters and transfer them to new photos I had taken.

It's easy to think you will make a new image into a piece of art by training a model on a piece of art and transferring its characteristics to a new image using machine learning. Where I find the real value is in actually understanding collections of images, while also being aware of the style transfer process, and thinking about how images can be trained and applied. However, this only gets you so far; there still has to be some value or meaning in how it's being applied, accomplishing a specific objective and delivering some sort of meaning. If you are doing this as part of some graphic design work, it will be different than if you are doing it for fun on a mobile phone app with your friends.

To further stimulate my imagination and awareness I'm looking through a variety of open image collections, from a variety of institutions:

I am also using some of the usual suspects when it comes to searching for images on the web:

I am working on developing specific categories that have relevance to the storytelling I'm doing across my blogs, and sometimes to help power my partners work as well. I'm currently mining the following areas, looking for interesting images to train style transfer machine learning models:

  • Art - The obvious usage for all of this, finding interesting pieces of art that make your photos look cool.
  • Video Game - I find video game imagery to provide a wealth of ideas for training and applying image style transfers.
  • Science Fiction - Another rich source of imagery for the training of image style transfer models that do cool things.
  • Electrical - I'm finding circuit boards, lighting, and other electrical imagery to be useful in training models.
  • Industrial - I'm finding industrial images to work for both sides of the equation in training and applying models.
  • Propaganda - These are great for training models, and then transferring the texture and the meaning behind them.
  • Labor - Similar to propaganda posters, potentially some emotional work here that would transfer significant meaning.
  • Space - A new one I'm adding for finding interesting imagery that can train models, and experiencing what the effect is.

As I look through more collections, and gain experience training style transfer models, and applying models, I have begun to develop an eye for what looks good. I also develop more ideas along the way of imagery that can help reinforce the storytelling I'm doing across my work. It is a journey I am hoping more librarians, museum curators, and collection stewards will embark on. I don't think you need to learn the inner workings of machine learning, but at least develop enough of an understanding that you can think more critically about the collection you are knowledgeable about. 

I know Jason would like to help you, and I'm more than happy to help you along in the process. Honestly, the biggest hurdle is the money to afford the GPUs for training the models. After that, it is about spending the time finding images to train models, as well as applying the models to a variety of imagery, as part of some sort of meaningful process. I can spend days looking through art collections, then spend a significant amount of AWS budget training machine learning models, but if I don't have a meaningful way to apply them, it doesn't bring any value to the table, and it's unlikely I will be able to justify the budget in the future.

My algorithmic rotoscope work is used throughout my writing and helps influence the stories I tell on API Evangelist, Kin Lane, Drone Recovery, and now Contrafabulists. I invest about $150.00 / month training image style transfer models, keeping a fresh number of models coming off the assembly line. I have a variety of tools that allow me to apply the models using Algorithmia and now Somatic. I'm now looking for folks who have knowledge of and access to interesting image collections, who would want to learn more about image style transfer, as well as graphic design and print shops, mobile application development shops, and other interested folks who are just curious about WTF image style transfers are all about.


          Andy Wingo: it's probably spam   

Greetings, peoples. As you probably know, these words are served to you by Tekuti, a blog engine written in Scheme that uses Git as its database.

Part of the reason I wrote this blog software was that from the time when I was using Wordpress, I actually appreciated the comments that I would get. Sometimes nice folks visit this blog and comment with information that I find really interesting, and I thought it would be a shame if I had to disable those entirely.

But allowing users to add things to your site is tricky. There are all kinds of potential security vulnerabilities. I thought about the ones that were important to me, back in 2008 when I wrote Tekuti, and I thought I did a pretty OK job on preventing XSS and designing-out code execution possibilities. When it came to bogus comments though, things worked well enough for the time. Tekuti uses Git as a log-structured database, and so to delete a comment, you just revert the change that added the comment. I added a little security question ("what's your favorite number?"; any number worked) to prevent wordpress spammers from hitting me, and I was good to go.

Sadly, what was good enough in 2008 isn't good enough in 2017. In 2017 alone, some 2000 bogus comments made it through. So I took comments offline and painstakingly went through and separated the wheat from the chaff while pondering what to do next.

an aside

I really wondered why spammers bothered though. I mean, I added the rel="external nofollow" attribute on links, which should prevent search engines from granting relevancy to the spammer's links, so what gives? Could be that all the advice from the mid-2000s regarding nofollow is bogus. But it was definitely the case that while I was adding the attribute to commenter's home page links, I wasn't adding it to links in the comment. Doh! With this fixed, perhaps I will just have to deal with the spammers I have and not even more spammers in the future.

i digress

I started by simply changing my security question to require a number in a certain range. No dice; bogus comments still got through. I changed the range; could it be the numbers they were using were already in range? Again the bogosity continued undaunted.

So I decided to break down and write a bogus comment filter. Luckily, Git gives me a handy corpus of legit and bogus comments: all the comments that remain live are legit, and all that were ever added but are no longer live are bogus. I wrote a simple tokenizer across the comments, extracted feature counts, and fed that into a naive Bayesian classifier. I finally turned it on this morning; fingers crossed!

My trials at home show that if you train the classifier on half the data set (around 5300 bogus comments and 1900 legit comments) and then run it against the other half, I get about 6% false negatives and 1% false positives. The feature extractor interns sequences of 1, 2, and 3 tokens, and doesn't have a lower limit for number of features extracted -- a feature seen only once in bogus comments and never in legit comments is a fairly strong bogosity signal; as you have to make up the denominator in that case, I set it to indicate that such a feature is 99.9% bogus. A corresponding single feature in the legit set without appearance in the bogus set is 99% legit.

Of course with this strong of a bias towards precise features of the training set, if you run the classifier against its own training set, it produces no false positives and only 0.3% false negatives, some of which were simply reverted duplicate comments.

It wasn't straightforward to get these results out of a Bayesian classifier. The "smoothing" factor that you add to both numerator and denominator was tricky, as I mentioned above. Getting a useful tokenization was tricky. And the final trick was even trickier: limiting the significant-feature count when determining bogosity. I hate to cite Paul Graham but I have to do so here -- choosing the N most significant features in the document made the classification much less sensitive to the varying lengths of legit and bogus comments, and less sensitive to inclusions of verbatim texts from other comments.
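
For the curious, a minimal Python sketch of this kind of filter might look like the following. It is only an illustration of the approach described above (interned 1-, 2- and 3-token features, additive smoothing, and a cap on the number of significant features), not Tekuti's actual Scheme implementation, and the smoothing value and feature cap below are assumed numbers rather than the ones used on this blog.

# Minimal naive Bayes bogus-comment filter (illustrative sketch only).
import math
import re
from collections import Counter

def tokenize(text):
    # crude tokenizer: URLs and word-like runs, lowercased
    return re.findall(r"https?://\S+|[\w']+", text.lower())

def features(text, max_n=3):
    # intern sequences of 1, 2, and 3 tokens as features
    toks = tokenize(text)
    feats = []
    for n in range(1, max_n + 1):
        feats.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return feats

class BogusFilter:
    def __init__(self, smoothing=0.1, top_n=15):
        self.smoothing = smoothing  # added to numerator and denominator (assumed value)
        self.top_n = top_n          # only the N most significant features vote (assumed value)
        self.bogus = Counter()
        self.legit = Counter()

    def train(self, text, is_bogus):
        (self.bogus if is_bogus else self.legit).update(features(text))

    def _p_bogus(self, feat):
        b, l = self.bogus[feat], self.legit[feat]
        if b + l == 0:
            return None  # never-seen features carry no signal
        s = self.smoothing
        return (b + s) / (b + l + 2 * s)

    def classify(self, text):
        probs = [p for p in (self._p_bogus(f) for f in set(features(text)))
                 if p is not None]
        # keep only the features whose probability is furthest from 0.5
        probs.sort(key=lambda p: abs(p - 0.5), reverse=True)
        probs = probs[:self.top_n]
        if not probs:
            return 0.5
        # combine the per-feature probabilities naively, in log space to avoid underflow
        log_bogus = sum(math.log(p) for p in probs)
        log_legit = sum(math.log(1.0 - p) for p in probs)
        return 1.0 / (1.0 + math.exp(log_legit - log_bogus))

# usage: f = BogusFilter(); f.train(comment_text, is_bogus); reject if f.classify(new_comment) > 0.5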

We'll see I guess. If your comment gets caught by my filters, let me know -- over email or Twitter I guess, since you might not be able to comment! I hope to be able to keep comments open; I've learned a lot from yall over the years.


          TNR Global Launches Search Application for Museum Collections   
We use open source search technology that works with most museum software systems and databases including the popular museum software product PastPerfect.
          Compare Data Entry Jobs-National Data Entry Reviews   
Have you wondered about data entry roles from home? Have you often wondered what to look out for? Well, here are at least 4 things to look out for that may signal a problem. Get more about national data entry at our site. They are: charges for working, outrageous advertising, a poor name and no leadership. Get more about national data entry at our site. These things should be avoided like the plague, and any that fall under these categories shouldn't be asked for. It requires a little time to get started with winning jobs on the net. You need to make sure that the site from which you are taking work is legitimate. Get more about national data entry at our site. This is important since it is possible for a good looking website to turn out to be a scam posing as a site for data entry roles on the web. Get more about national data entry at our site. Check anti-fraud sites to determine that the site you are preparing to register with does not show up in their lists.



Best Legit Data Entry Jobs That Pays: Featured on CNN Money!
Check out my National Data Entry Review review at my site! - Don't forget, you can also get 50% - 75% off for a limited time so go now, you'll be sorry if you miss it!

National Data Entry: Our data entry personnel consistently make about $1,500 - $6,500 a month taking surveys and helping with various data entry jobs provided by companies all over the world. Find out more from our members on how they do it, and see their reviews of the legitimate data entry companies.

Normally, the type of work that you get on the net is in the area of report preparation, correspondence, database preparation, survey responses and similar work. Data entry roles online are accurate. In some cases, two individuals are given the work of entering info. The completed work is immediately checked with the work of the second individual. If your output is revealed to be inaccurate, the work gets sent back to you for revision. When looking at data entry jobs from home, you need to bear in mind that valid firms won't charge you a fee, for set up or otherwise to telecommute doing data entry. Get more about national data entry at our site.

They guarantee you an exorbitant amount of cash for doing this type of work. However, be warned that they cannot back up their claims, and many have been unprofitable by subscribing to this train of thought. You can make a pleasant sum, but exorbitant claims should make you run the other way and not look back. This is sometimes called fraudulent advertising and has led to plenty of folks' grief and lost money. Another thing to know is that data entry jobs online are lengthy. Get more about national data entry at our site. You are probably going to find yourself with eight to 10 hours of work. Cut-offs are to be followed precisely. Get more about national data entry at our site. This work is moneymaking when you have a giant buyer base with steady work flow. Related Articles on national data entry: High Paying Data Entry Jobs-Best Data Entry Jobs Online

This can take a little while to expose. Get more about national data entry at our site. If a search through the BBB doesn't turn up anything, be extremely wary about the company. If they have a file, the information contained will give you some idea of how this company is operated and the reputation of the company. If there are black marks, then don't sign up with that particular company. Now this one is a genuine danger sign! When you're making an attempt to work with any data entry roles from home, one warning sign is lack of or no leadership. Some questions you should ask when deciding : Who will your immediate supervisor be? Who does he report to? What is the chain of command for this job? Best Home Based Data Entry: Featured on CNN Money!

          Trace Any Cell Number - The Quickest and Easiest Method!   
Each time your mobile rings and you leave whatever you're doing and run to answer it, if it's a prank caller or a telemarketer, the kind of thought that goes through your mind needs no explanation. But if this happens every day, that's when you really feel like tracing the cell number and giving its owner a piece of your mind! In fact, now you can! All you have to do is simply access an online database specialized in reverse cell phone lookup!
          List of Upcoming Telugu Movies of 2017 & 2018 : Release Dates Calendar for all New Telugu Films   


Here is a list of all the latest 2017 & 2018 Telugu movies and information on the South Indian films released, along with a complete update of all new and upcoming Telugu films of 2017, 2018 and 2019.

Check out the latest listing of South Indian (Tollywood) movies released in 2017 and 2018, with actor, actress and release date information, at the MT Wiki Movie Database!
Release Dates of Telugu Movies in 2017, 2018 & Upcoming Tollywood Films List

Below is the list of upcoming Telugu movies in 2017, 2018 and 2019.

Release Date and Movie Name - Lead Star Cast (Actor & Actress)

12th January, 2017
Gautamiputra Satakarni - Nandamuri Balakrishna, Shriya Saran
Shatamanam Bhavati - Sharwanand, Anupama Parameswaran, Prakash Raj, Jayasudha
13th January, 2017
Khaidi No. 150 - Chiranjeevi, Kajal Aggarwal, Tarun Arora, Raai Laxmi
25th January, 2017
Balam - Hrithik Roshan, Yami Gautam, Sonu Sood, Ronit Roy, Girish Kulkarni
26th January, 2017
Yamudu 3 (S3) - Suriya, Anushka Shetty, Shruti Haasan, Vivek, Radha Ravi
Bogan - Jayam Ravi, Arvind Swamy, Hansika Motwani, Akshara Gowda
Luckunnodu - Manchu Vishnu, Hansika Motwani, Tanikella Bharani, Raghu Babu
10th February, 2017
Ezra - Prithviraj Sukumaran, Priya Anand, Tovino Thomas, Sudev Nair, Vijayaraghavan
Om Namo Venkatesaya - Nagarjuna Akkineni, Anushka Shetty, Pragya Jaiswal, Saurabh Raj Jain, Jagapathi Babu
17th February, 2017
The Ghazi Attack - Rana Daggubati, Taapsee Pannu, Kay Kay Menon
Nenu Local - Nani, Keerthy Suresh, Naveen Chandra, Sachin Khedekar
Kittu Unnadu Jagratha - Raj Tarun, Anu Emmanuel, Arbaaz Khan
Chitrangada - Anjali, Sindhu Tolani, Raja Ravindra, Sakshi Gulati, Sapthagiri
24th February, 2017
Gunturodu - Manchu Manoj Kumar, Pragya Jaiswal, Kota Srinivasa Rao
Winner - Sai Dharam Tej, Rakul Preet Singh, Jagapati Babu, Mukesh Rishi
Babala Bagotham - Bharat Babu, Rami Reddy, Shefali Sharma
Pilichina Palukavu O Javarala - Jasmitha, Kiran Rathod
3rd March 2017
Navvula Sandadi - Babloo, Krishnudu, Nisha
Commando 2 - Vidyut Jamwal, Adah Sharma, Esha Gupta
10th March 2017
Premalo Mugguru - Arjun Sarja, Cheran, Vimal
Jai Bhavani - Meera Jasmine
Subhodayam - Kuppili Srinivas, MS Narayana, Shefali
Hormones - Anuya Reddy, Diksha Panth, Santhosh
Nesthanivi - Pavan Kumar, Posani Krishna Murali, Priyanka
Swapna Sundari - Chandu, Pratishta, Ranjitha
Anthe - Nagendra Babu, Nisha, Vijay Bhaskar
Swarnamanjili - Amruthavalli, Vishnu
Andhhagadu - Raj Tarun, Rajendra Prasad, Heebah Patel, Ashish Vidhyarthi, Raja Ravindra
17th March 2017
Saindhavudu - Srikanth Meka
Paisalo Paramatma - Baladitya, Mounika, Rajiv Kanakala
Vayasukantham - n.a
Tholi Pata - Kota Srinivasan Rao, Krishna Vasa, Madhavi Latha
Duet - Karthi, Aditi Rao Hydari, Delhi Ganesh, Vipin Sharma, Shraddha Srinath
24th March 2017
December 31st - Abhinayashree, Chalapathi Rao, Jeeva
O Manasa - Ajay Manthena, Aparna Bajpai, Tejesh Naidu
Daya - Dev Gill, Gayathri Iyer, Nandamuri Tarakaratna
Contract - Ameesha Patel, Arjun Sarja, J D Chakravarthy, Mallika Sherawat, Minissha Lamba
Prithvi IAS - Avinash, Parvathi Menon, Puneeth Rajkumar
Baburao Ninnodala - Jagapati Babu
Oxygen - Gopichand, Anu Emmanuel, Abhimanyu Singh, Jagapati Babu, Sayaji Shinde
29th March 2017
Katama Rayudu - Pawan Kalyan, Shruti Haasan
31st March 2017
Bullet Rani - Priyanka Kothari, Ashish Vidhyarthi, Ravi Kale
Alexander IPS - Annapoorna, Dr Gidi Kumar, Prameena Naidu
Guru - Venkatesh, Ritika Singh
7th April 2017
Chilipi Allarilo Chinni Aasa - Srikaran, Jasmitha
Keshava - Nikhil Siddharth
Sivalinga - Lawrence Raghavendra, Ritika Singh, Radha Ravi, Bhanupriya, Oorvasi
13th April 2017
Lukkunnodu - Manchu Vishnu, Hansika Motwani, Tanikella Bharani, Vennela Kishore, Posani Krishna Murali
14th April 2017
Rudhiram - Jagapati Babu
Sabaash Naidu - Kamal Haasan, Shruti Haasan, Ramya Krishnan, Brahmanandam
Mister - Varun Tej, Hebah Patel, Lavanya Tripathi
Colors - Gundu Hanumantha Rao, Siyana, Varun
Love Attack - Ria Achappa, Suman, Vamsi Krishna
Mohini Jaganmohini - Jayalalitha, Narasimha Raju, Rami Reddy
21st April 2017
Dev - Diganth Manchale, Charmi Kaur, Nathalia Kaur, Ananth Nag
28th April 2017
Bahubali 2: The Conclusion - Prabhas, Tamannaah Bhatia, Rana Daggubati, Anushka Shetty, Ramya Krishna, Sudeep
Alallu - Rekha, Suresh, Vijay Sai
Mondodu - Srikanth Meka
5th May 2017
Cool Boys Hot Girls - Praveen
Neeli Kallu - Akanksha, Richa Panai, Suman
12th May 2017
Andamaina Rojulu - Madhu A Naidu
Donga Prema - Angel Singh, Jai Akash, Kavitha
Nuvve Naa Hero - Sumanth Ashwin
Kumbakonam - Brahmaji, Raghunatha Reddy, Rami Reddy
Ayomayam Apartment - Sai Sirisha, Satya Saagam, Vijay Bhaskar
19th May 2017
Gaali - Ajay Rathnam, Bhuvaneshwari, Vindhya
Maha Shankarudu - Sridhar, Sudharani
Plan - Mamtha, Rami Reddy, Saikiran
Criminals - Nisha Kothari, Akhil Karthik, Ravi Babu, Uttej, Vijay Sai
26th May 2017
Black Money - Jagapati Babu, J.D. Chakravarthi
After Drink - Madhulagna Das, Reshmi, Sri Keerthi
Gaadpu - J D Chakravarthy
Chandamama Raave - Priyal Gor
Rings - Johnny Galecki, Aimee Teegarden, Alex Roe
2nd June 2017
Rowdy Simha - Sudeep, Saloni Aswani, Anant Nag
Gowtham Nanda - Gopichand, Hansika Motwani, Catherine Tresa
Eleven - Navdeep, Srikanth
Autowala Zindabad - Dharmavarapu Subramanyam, Prachee Adhikari, Saikiran
Sri Sai Sankalpam - Krishna Bhagavan, Kalpana Chowdary, Padmini
9th June 2017
Kaarulo Shikarukelithe - Ishika Singh, Dheeru Mahesh, Suresh Reddy, Jabardasth Phani, Sudharshan Reddy
Vadha - Vamsi Krishna
Feelings - n.a
Natho Nenu - Jai Akash
Intintaa Annamayya - Sanam Shetty, Revanth, Ananya
Yuvatejam - S. Srinivasulu, Megana
11th June 2017
Patel S.I.R - Jagapati Babu
16th June 2017
Hyderabad Love Story - Rahul Ravindran, Reshmi Menon, Sija Rose, Dhanraj Sukhram
Ninu Chusina Kshanana - Anish Tejeswar, Barbie Chopra, Brahmanandam
23rd June 2017
Duvvada Jagannadham (DJ) - Allu Arjun, Pooja Hedge
Sambhavami - Mahesh Babu, Bharath, Rakul Preet Singh, S. J. Surya
Simple Love Story - Amitha Rao, Karthik, Anvika Rao, Bhanu Chander
30th June 2017
Snehabandham - Ali, Chitram Seenu, Suhas
Vaishakam - Harish Kumar, Avanthika Mishra
7th July 2017
2000 Crore Black Money - n.a
Chalo One Two Three - Keerthi, Saikiran, Sivaji Raja
Digbandhana - Dhee Srinivas, Sravani, Dhanraj Sukhram, Nagineedu Vellanki
14th July 2017
Bombayi Mitai - Disha Pandey, Niranjan Deshpande, Bullet Prakash, Chikkanna, Dimple Kapadia
Agent Gopi - Ravi Babu
Vadevvadaina Sare Nenu Ready - Annapoorna, Kiran Khan, Pujasri
Naaku Inko Perundi - G. V. Prakash Kumar, Anandhi, Karunas, VTV Ganesh
Nakshatram - Sundeep Kishan, Sai Dharam Tej, Pragya Jaiswal
21st July 2017
Entha Varaku Ee Prema - Jiiva, Kajal Agarwal, Bobby Simha, Sunaina, Rajendran
Vasthavam - Divya Nagesh, Krishnudu, Richard Rishi
I Hate U - Jayant
4th August 2017
Premaku Sye - Indra Nag, Nikita, Kallu Chidambaram
11th August 2017
Bhagmati - Anushka Shetty, Unni Mukundan, Aadhi Pinisetty
Vasham - Krishneswar Rao, Vasudevrao, Sweta Varmaa, Nanda Kishore, Gayatri Bhargavi
Raju Gari Gadhi 2 - Nagarjuna Akkineni, Seerat Kapoor, Samantha Ruth Prabhu
18th August 2017
Pakado Pakado - Aryan Rajesh, Kanja Karuppu, Monika Singh
21st August 2017
Touch Chesi Chudu - Ravi Teja, Raashi Khanna
25th August 2017
Raja The Great - Ravi Teja, Mehreen Pirzada
Pellam Hatya - Suhasini, Ramakrishna, Rithika
Lechipodhama - Parimal, Sandhya, Suman Shetty
01st September 2017
Ala Jarigindi Oka Roju - Krishnudu, Tanu Roy
03rd September 2017
Voter - Manchu Vishnu, Surabhi
8th September 2017
Radha - Sharwanand, Lavanya Tripathi
10th September 2017
Jai Lava Kusa - N. T. Rama Rao Jr., Raashi Khanna, Niveda Thomas
22nd September 2017
Life Ki Flat Form Teenage - Raghava, Uma
18th October 2017
2.0 (Robot 2) - Rajinikanth, Akshay Kumar, Amy Jackson, Adil Hussain, Sudhanshu Pandey
01st December 2017
Rampachodavaram - Anushka Shetty
22nd December 2017
Kathalo Rajakumari - Nara Rohith, Namitha Pramod
Upcoming Movie In 2017
Goutham Nanda - Hansika Motwani
Katha Nayagan - Vishnu, Catherine Tresa
Venkatapuram - Rahul, Mahima Makwana
Dwaraka - Vijay Deverakonda, Pooja Jhaveri, Prakash Raj, Prabhakar
Nene Raju Nene Mantri - Rana Daggubati, Kajal Aggarwal, Catherine Tresa
Andagadu - Raj Tarun, Hebah Patel, Rajendra Prasad
Angel - Hebah Patel, Naga Anvesh, Saptagiri
Fidha - Varun Tej, Sai Pallavi
Upcoming Movie In 2019
Jagan Mohan IPS - Nayantara, Gopichand, Raghu Babu, Kota Srinivasa Rao
Auto Jaani - Nayantara, Chiranjeevi
Also Check Out
List of Tamil Upcoming Movie
Top 10 Telugu Movies of All Times
Top 10 Telugu Actress of 2016 by Salary
Ravi Teja Upcoming Movies
Vijay Upcoming Movies
Nayantara Upcoming Movies

          How to Find Someone's Name From a Cell Number - This is Undoubtedly the Best Method!   
Today the Internet is crowded with Reverse Cell Phone Directories. People often seek help from such databases...
          Tracing Cell Numbers - 2 Tips to Identify a Reliable and Accurate Database!   
Once again the internet comes to the rescue and provides an answer to a problem that (according to statistics) afflicts 1.2 million citizens of the UK. They've all got a good enough reason to have a mobile number traced...
          The Best Reverse Cell Phone Lookup - Why You Must 'Pick and Choose' - Important!   
Databases for Reverse Cell Phone lookup are a common sight on the internet. The service comes in handy when you need to identify an anonymous caller or if you need to make a check on someone's identity or numerous other reasons. You can use a database quickly and easily from work, the library or home, basically, anywhere that you have a computer with an Internet connection.
          Coal Plants, Anchor, Ubuntu, More: Friday Afternoon Buzz, June 30, 2017   
NEW RESOURCES Mining Review: New Urgewald database reveals world’s biggest coal plant developers. “Previous in-depth research completed by Urgewald played a key role in initiating the coal divestment actions of the Norwegian […]
          (USA-TN-Chattanooga) Medical Office Specialist   
Exciting opportunity to join the nation's largest provider of healthcare services! We offer an excellent benefits package, competitive salary and growth opportunities. We believe in our team and your ability to do great work with us, so we are excited to say we have an amazing tuition reimbursement program to help you further your career with our organization. The HCA Physician Services Group (PSG) is the physician and practice management solution for the Hospital Corporation of America (HCA). We manage a collection of highly motivated healthcare professionals and innovative leaders who are committed to excellence in every aspect of their career. To check out more about us please visit: http://hcahealthcare.com/about/ The Medical Office Specialist is a key member of our Primary Care Practice who helps create the best first impressions for our office and makes patients feel welcome. We are an amazing team who work hard to support each other and are seeking a great addition to the team who feels patient care is as important as we do! DUTIES INCLUDE BUT ARE NOT LIMITED TO: * Handling all scheduling concerns including appointment reminders. * Answering multiple telephones and accurately documenting messages * Forwarding telephone calls appropriately and following up on return calls * Checking-in patients and properly documenting registration * Insurance verification and verification of patient demographics * Filing medical records and patient/administrative files * Retrieving medical records and delivering to appropriate providers or department * Collecting co-pays and cash from patients, getting authorization on credit cards * Entering charges, payments, and balancing the day in the computer EXPERIENCE * Less than one year of experience is required; however, one to three years of experience is preferred. * Strong preference for prior experience in an Internal Medicine, Primary Care, and/or Family practice or setting. * Proficiency using scheduling software as well as a patient tracking database is required. We are actively interviewing! Apply now! **Title:** *Medical Office Specialist* **Location:** *Tennessee-Chattanooga-Parkridge Medical Group East Ridge* **Requisition ID:** *09374-58508*
          (USA-TN-Chattanooga) Sr. Tax Accountant   
The position will be responsible for timely preparation and analysis of schedules utilized in the preparation of income tax returns and related GAAP and statutory financial reporting.
**Job Description:**
**Job Duties & Responsibilities**
+ Analyze accounting processes related to tax
+ Perform special projects and research as needed
+ Prepare general ledger account reconciliations
+ Prepare and post journal entries to the general ledger system
+ Prepare schedules used in the preparation of the consolidated federal income tax return; state income/franchise tax returns for multi-state companies; and federal not-for-profit tax returns
+ Prepare schedules used in preparation of tax provision for interim and annual financial statements pursuant to GAAP and statutory accounting standards including footnote disclosures to the GAAP and statutory financial statements
+ Prepare schedules used in calculating quarterly estimated tax payments and annual tax return extensions
+ Maintain tax depreciation system and prepare work papers for multiple companies
+ Researches and analyzes accounting transactions. Addresses the technical applicability, implements adherence to the guidance and assists others in implementing and understanding tax accounting guidance
**Job Qualifications**
**Education**
+ Bachelor's degree in accounting or related field required.
**Experience**
+ Minimum 3 years work experience in a tax role of a corporate tax department or 2 years' experience in a public accounting firm required
**Skills/Certifications**
+ Solid PC skills with spreadsheet emphasis
+ Must possess careful attention to detail
+ Ability to prioritize work and meet reporting deadlines
+ Ability to establish and maintain working relationships with both internal and external customers (i.e. BCBSA Association, regulatory authorities, internal and external auditors)
+ Ability to comprehend and produce written technical communication
+ Basic understanding of financial/accounting theory
+ Basic knowledge and application of federal and multi-state tax laws, regulations and procedures involved in various types of federal and state taxation
+ Basic tax research experience utilizing internet resources and on-line tax databases (e.g. RIA Checkpoint, BNA, CCH)
**Job Specific Requirements:** BBEX 10 AEP
**Number of Openings Available:** 1
**Worker Type:** Employee
**Worker Sub-Type:** Regular
**Company:** BCBST BlueCross BlueShield of Tennessee, Inc. BCBST is an Equal Opportunity employer (EEO), and all employees and applicants will be entitled to equal employment opportunities when employment decisions are made. BCBST will take affirmative action to recruit, hire, train and promote individuals in all job classifications without regard to race, religion, color, age, sex, national origin, citizenship, pregnancy, veteran status, sexual orientation, physical or mental disability, gender identity, or membership in a historically under-represented group.
**BlueCross BlueShield of Tennessee is not accepting unsolicited assistance from search firms for this employment opportunity. All resumes submitted by search firms to any employee at BlueCross BlueShield of Tennessee via email, the Internet or any other method without a valid, written Direct Placement Agreement in place for this position from BlueCross BlueShield of Tennessee HR/Talent Acquisition will not be considered. No fee will be paid in the event the applicant is hired by BlueCross BlueShield of Tennessee as a result of the referral or through other means.**
**Tobacco-Free Hiring Statement** To further our mission of peace of mind through better health, effective 2017, BlueCross BlueShield of Tennessee and its subsidiaries no longer hire individuals who use tobacco or nicotine products in any form in Tennessee and where state law permits. A tobacco-free hiring practice is part of an effort to combat serious diseases, as well as to promote health and wellness for our employees and our community. All offers of employment will be contingent upon passing a tobacco/nicotine test. An individual whose test result is positive for tobacco/nicotine will be disqualified from employment and the job offer will be withdrawn. Individuals who fail the tobacco/nicotine screening will be permitted to reapply for employment after 6 months, if tobacco/nicotine-free. Resources to help individuals discontinue the use of tobacco/nicotine products include smokefree.gov or 1-800-QUIT-NOW. About Us: As Tennessee's largest health benefit plan company, we've been helping Tennesseans find their own unique paths to good health for over 65 years. More than that, we're your neighbors and friends – fellow Tennesseans with deep roots of caring tradition, a focused approach to physical, financial and community good health for today, and a bright outlook for an even healthier tomorrow.
          How to Trace a Cell Phone Number to Its Owner - You Do Not Want to Bark Up the Wrong Tree! Warning!   
Reverse Cell Phone Lookup - Think DATABASE! If you're here reading up on reverse cell phone traces you're probably looking to catch a prank caller, a cheating spouse, or a swindler who's avoiding your calls. Such traces have even helped the cops catch the bad guys on occasion.
          How to Find Someone's Name From a Cell Phone Number - 3 Deficiencies You Must Avoid - Beware!   
People often have doubts whether their spouses are being faithful and they have to opt for Reverse Cell Phone Lookup to confirm their suspicions. It's a simple process where you just type in the number in question and search on an online database. Different services offer different facts including the following...
          Tracing a Cell Phone Number - The Trickiest Part is Selecting the Right Database - 3 Tips   
One of the main reasons why people choose Reverse Cell Number Lookups is when they think their partner is being unfaithful to them. It's as easy as copying the doubtful numbers and typing them into the database. Depending on the service you decide on, it could tell you the following information for each number.
          Senior Database Administrator - Aderas, Inc - Reston, VA   
Internet Explorer, Adobe Acrobat Reader X, ActivClient CAC, ActivCard Gold for CAC - PKI, ForgeRock Open AM Java EE Policy Agent, Tivoli Client, Veritas Volume...
From Aderas, Inc - Thu, 01 Jun 2017 21:05:26 GMT - View all Reston, VA jobs
          Information Assurance Specialist -- - IOMAXIS - United States   
Familiarity with remote access tools, system and network logging tools (ArcSight, Netwitness), workstation, application & database server scanning tools...
From IOMAXIS - Thu, 22 Jun 2017 08:30:44 GMT - View all United States jobs
          Specialist, Public Relations - Indigo Books & Music - Toronto, ON   
Ongoing development and maintenance of Indigo public relations media and blogger database. Build and maintain strong, positive relationships with media teams...
From Indigo Books & Music - Thu, 22 Jun 2017 03:09:44 GMT - View all Toronto, ON jobs
          Director of Recruiting - Jefferson Dental Clinics - Dallas, TX   
Maintains candidate tracking database for all candidates with notes on communication method, dates contacted, and level of candidate’s interest - ensuring...
From Jefferson Dental Clinics - Tue, 02 May 2017 22:20:11 GMT - View all Dallas, TX jobs
          Assoc Applications Developer - Kent State University - Kent, OH   
Java, XML, CSS, HTML, SQL and PHP. Appropriate development language(s), operating systems, reporting tools, relational database design and maintenance,... $42,752 - $58,751 a year
From Kent State University - Tue, 02 May 2017 16:42:32 GMT - View all Kent, OH jobs
          Entera Upgrades to Support Big Data with Latest Version of NXTera   

NXTera 6.5 adds Hadoop and Hive database framework support.

(PRWeb May 13, 2015)

Read the full story at http://www.prweb.com/releases/2015/04/prweb12680865.htm


          eCube Systems Announces Complete 64 bit Support in NXTera 64   

New Version of NXTera 6.4 makes it possible for legacy Entera applications to access the new 64 bit databases and architectures on Windows, Unix and Linux.

(PRWeb January 23, 2014)

Read the full story at http://www.prweb.com/releases/2014/nxtera64/prweb11500749.htm


          New Eclipse Tools for OpenVMS and Oracle RDB Certified by eCube Systems to Work with NXTware Remote   

Database Management Tool Simplifies RDB for Administrators and Developers Using Eclipse and NXTware Remote

(PRWeb November 12, 2013)

Read the full story at http://www.prweb.com/releases/openvms_rdb/nxtware_remote/prweb11267901.htm


          How the 2017 Arkansas legislature made life worse for you   
But it wasn't as bad as it could've been at the Capitol.

Arkansas's legislators were locked and loaded when they arrived for the 91st General Assembly this year, determined to get more guns into public places and take away voting and abortion rights, their evergreen attacks.

Thanks to the legislature, concealed weapons soon may be carried just about everywhere except Razorback games and the University of Arkansas for Medical Sciences. Unemployment benefits were cut, whistleblowers were silenced and charter schools were given advantages over regular public schools. Other legislation was symbolic but ugly, such as an act authored by Rep. Brandt Smith (R-Jonesboro) that aims to stop Sharia, or Islamic ecclesiastical law, from taking over Arkansas's court system.

Some of the silliest bills went nowhere, such as efforts by Sen. Jason Rapert (R-Conway) to wipe Bill and Hillary Clinton's names off the Little Rock airport, to indefinitely delay implementing the voter-approved medical marijuana program and to call a convention of the states to amend the U.S. Constitution to ban same-sex marriage. Anti-immigrant legislation that would have penalized colleges and cities with so-called "sanctuary" policies withered in committee. Rep. Smith, the sponsor of the bill targeting universities, warned that rogue professors might hide undocumented immigrants in their offices and then dump their human waste on campus in the dark of night; surprisingly, this argument did not persuade his colleagues. Rep. Kim Hendren (R-Gravette) proposed banning cell phones from public schools; later, he filed a bill prohibiting teachers from using books authored by leftist historian Howard Zinn. Neither gained traction.

What was good? A little. Conservatives tried to circumscribe the medical marijuana amendment with bans on smoking and edible products, among other roadblocks, but the worst of the anti-pot legislation stalled. Evidently reassured by Governor Hutchinson's promises to make the private option more conservative (read: stingier) down the line, the annual appropriation for Medicaid passed without a major fight — a relief for the 300,000-plus Arkansans receiving health insurance through Obamacare. Pushed by Hutchinson, the ledge directed some of Arkansas's tobacco settlement proceeds to expand a waiver program for the developmentally disabled, opening the door to services for some 500 to 900 desperate families stranded for years on a waitlist. At long last, the state will stop its reprehensible practice of celebrating Robert E. Lee's birthday simultaneously with Martin Luther King Jr. Day, a symbolic but important step forward that was championed by the governor.

Here's our survey of the damage:

GUNS
In Glock we trust

The biggest gun-related news this session was the passage and signing of House Bill 1249, now Act 562, which creates a new "enhanced carry" permit that will allow gun owners who have undergone eight hours of additional training — including active shooter training, with a curriculum still to be worked out by the Arkansas State Police — to carry a concealed handgun in many places previously forbidden under the state's concealed carry law, including the state Capitol, public colleges and universities, bars, churches and courthouses. Concealed carry in prisons, courtrooms and K-12 schools is still forbidden, and private property owners, including bars, churches and private colleges, can still prohibit firearms if they choose.

Sponsored by Rep. Charlie Collins (R-Fayetteville), the bill was a far piece from where it started by the time it was signed. Originally, Collins' bill would have solely mandated that public universities and colleges allow faculty and staff to carry concealed handguns. It was an attempt to push back against the state's public colleges and universities, which have steadfastly rejected Collins' and his colleagues' attempts to institute "campus carry" in the past. Amendments to HB 1249 soon pushed it several clicks further toward the broad "guns everywhere" approach favored by the National Rifle Association, and far beyond a potential shooting iron in a well-trained professor's briefcase. Now, anyone with the enhanced permit will be able to carry on a college campus, including into sometimes-contentious student and faculty disciplinary hearings and raucous college dorms.

The passage of the bill spawned some last minute scrambling when the Southeastern Conference expressed concerns about fans coming to college football games carrying heat, resulting in Act 859, a cleanup effort that prohibits concealed carry in college athletic venues. Also exempted by Act 859 were daycares, UAMS and the Arkansas State Hospital, an inpatient facility for the mentally ill. The bill also allows private businesses and organizations to ban concealed carry without posting a sign to that effect. If a private business decides to ban concealed carry without posting a sign, anyone caught carrying a concealed weapon on the premises can be ejected or told to remove their gun if they want to come back. If the concealed carrier repeats the infraction, they can be charged with a crime. Even after the purported cleanup, that still leaves a lot of places open to concealed carry unless those places set a policy forbidding the practice, including most hospitals, mental health facilities and off-campus high school and middle school sporting events. At the signing ceremony for HB 1249, Chris Cox, executive director of the NRA's Institute for Legislative Action, said, "We believe that if you have a legal right to be somewhere, and you're a law-abiding person, you ought to have a legal right to defend yourself." For the NRA, that means the right to be armed everywhere, any time, as long as you don't have a criminal record. Notice Cox didn't say anything about pesky permits or training.

Speaking of law-abiding persons, also of concern when it comes to concealed carry is Act 486. Under the law, the Arkansas State Police is now prohibited from establishing or amending any administrative rule that would revoke or suspend a concealed carry permit unless the holder of the permit was found to be in violation of a criminal offense. While not penalizing a person if they haven't committed a crime sounds like a good idea, the problem is that people can and do go off the rails for a multitude of reasons, many of which have nothing to do with a violation of the criminal code. Before the passage of Act 486, the State Police had broad latitude to revoke or suspend concealed carry permits for a number of reasons, including serious alcohol and drug abuse, dangerous mental illness, or a mental health professional's determination that a permit holder might be a threat to himself, his family or the public. With the passage of Act 486, though, a concealed carry holder who suffers a complete mental breakdown to the point of visual hallucinations can keep on packing right until the moment he or she is admitted at the State Hospital (thanks Act 859!), even if the person's family or a doctor asks the State Police to pull their permit. Ditto with people suffering from substance abuse issues, elderly dementia patients and those who hint they might be capable of suicide or homicide. Under the law, a permit can still be revoked or suspended if the person is caught carrying into a prohibited place like a courtroom or jail, but as seen above, the list of places where handguns are prohibited is dwindling by the year. Otherwise, thanks to Act 486, we just have to wait until that person commits a crime. By then, it's too late.

In the What Could Have Been column, we have HB 1630, by Rep. Clarke Tucker (D-Little Rock), which would have created the misdemeanor offense of "negligently allowing access to a firearm by a child" if an owner failed to secure a loaded gun or left it in a place a child could easily access. Though the bill had exemptions for hunting, sport shooting and use of firearms on a farm and had a sliding scale of penalties, with incidents involving the death or serious injury of a child at the top of the list, it went nowhere.

EDUCATION

Traditional schools took licks, but the worst was kept at bay.

The single worst education bill passed in 2017 was probably Act 542, sponsored by Alan Clark (R-Lonsdale), which requires school districts to sell or lease "unused or underutilized" facilities to competitor charter schools. Charters already had right of first refusal in the event a district decides to sell a building — but after Act 542 goes into effect this summer, a charter can force a district to sell or lease a building, even if the district doesn't want to do so. If a different entity — a nonprofit, say, or a clinic or a business — wants to buy an unoccupied school building instead, that's too bad. Act 542 requires a district to hold on to unused buildings for two years, just in case a charter comes along and wants the facility for itself.

Clark pointed to a situation a few years ago in which the Helena-West Helena School District refused to sell a vacant elementary to KIPP Delta, a charter. But there are good reasons why a district wouldn't want to hand over an asset to a direct competitor: Charter networks tend to weaken districts by bleeding away higher-performing students and public money, and they often enjoy advantages their traditional public school counterparts do not. As some opponents of the bill pointed out, the new law is tantamount to forcing Walmart to sell a store to Target. That's why school superintendents across the state fought the bill and convinced no small number of Republicans to join Democrats in opposing it. In the end, though, it passed the House on a 53-32 vote. Republican legislators also rejected proposals by Democrats Sen. Joyce Elliott and Rep. Clarke Tucker — both from Little Rock, which is seeing unchecked charter growth at the expense of traditional public schools — to impose fairer rules on charters.

Thankfully, the legislature turned down an even worse proposal. HB 1222 by Rep. Jim Dotson (R-Bentonville) proposed a convoluted scheme to divert millions of dollars away from the public coffers (by means of a tax credit to wealthy donors) and toward private schools in the guise of "education savings accounts" to be used for student tuition. A school voucher plan in all but name, the bill would have been devastating to public education. Dotson eventually scaled back the legislation to a pilot program with a four-year sunset, allowing a Senate version of the bill to win passage in that chamber — but many Republicans remain fond of their local school districts, and it narrowly failed in the House.

Meanwhile, legislators expanded an existing voucher program, the Succeed Scholarship. Created in the 2015 session, it uses public tax dollars to pay private school tuition for a limited number of K-12 students with special needs. Parents are required to waive their child's civil rights protections under the federal Individuals with Disabilities Education Act. In the past, the scholarship was open only to kids with an Individualized Education Program, or IEP; now, foster children living in group homes will also be eligible, thanks to Act 894 by Rep. Kim Hammer (R-Benton). Act 327 by Rep. Carlton Wing (R-North Little Rock) will allow a nonaccredited private school to participate, as long as the school has applied for accreditation. And, the appropriation for the Succeed Scholarship rose from $800,000 to $1.3 million — an increase of 63 percent — potentially allowing as many as 200 students statewide to participate.

That bump is especially notable alongside the meager 1 percent increase in the state's overall K-12 education budget for the next two years — far less than the 2.5 percent boost recommended by legislative staff tasked with determining what constitutes "adequate" school funding. A bit more money will be directed to teacher pay and special education, and pre-kindergarten will see an overdue $3 million increase, so the money situation could be worse. Still, with state revenue squeezed hard by tax cuts, and private and charter schools knocking at the door, traditional public schools are clearly not the General Assembly's top priority.

On other fronts, school legislation was a mixed bag. Elliott's Act 1059 will limit the use of out-of-school suspensions and expulsions for students in grades K-5 — a much-needed reform — but her bid to end corporal punishment failed in committee. (Rural Arkansas still loves the paddle.) One of the better education bills to pass this session was Elliott's Act 1039, which gives teeth to a 2013 law (also by Elliott) requiring dyslexia screening and intervention. Its reporting requirements and enforcement mechanism hopefully will force districts to deliver better reading interventions to dyslexic students. A major accountability bill developed by the state Education Department, Act 930, will overhaul how schools are monitored by the state, though it's too soon to say how the changes will play out. Act 478, by Rep. Bruce Cozart (R-Hot Springs), will require high school students to pass a civics test before graduating; an attempt by Rep. John Walker (D-Little Rock) to impose the same requirement on legislators and state agency heads received a cold reception. A bill by Rep. Mark Lowery (R-Maumelle), now Act 910, will end September school elections and require them to be held concurrent with the November general or spring primary election date. That could spell trouble for future millage votes.

Finally, there's higher education: "Campus carry" dominated the news, but a major change in funding may be just as consequential. Act 148, which originated with the governor's office, creates a funding formula for colleges and universities that ties state money to metrics like graduation rate. HB 1518, now Act 563, a worthy bill by Rep. James Sturch (R-Batesville) requires the Arkansas Higher Education Coordinating Board to create an action plan for addressing sexual assault on college campuses.

Benjamin Hardy

TAXES

Some help for the working poor and lots of punting.

Give modest credit to Governor Hutchinson. In the 2013 and 2015 legislative sessions, Republican legislators pushed a massive cut to capital gains taxes and reduced the income tax burden on all but the working poor. This session, Hutchinson provided some relief at the lower end of the tax brackets, pushing through a $50 million tax cut directed at households with a taxable income of less than $21,000. The cut is misleading, though, because it targets taxable income, which is often far less than salary or adjusted gross income. In fact, Arkansas Advocates for Children & Families pointed out that 48 percent of the overall $50 million cut will go to taxpayers in the top 40 percent of earners, while only 5 percent will go to those making less than $18,000 per year.

Establishing a refundable state Earned Income Tax Credit, tied to the federal EITC, would have been considerably more beneficial to the lower 40 percent of Arkansas earners, who often have no income tax liability, but pay a large share of their income in sales tax. An EITC would have provided a more substantial boost to the working poor at less cost than Hutchinson's cut. Rep. Warwick Sabin (D-Little Rock) and Sen. Jake Files (R-Fort Smith) were behind the EITC proposal, which historically has bipartisan appeal, but they couldn't get support from Hutchinson or enough other legislators.

Hutchinson also supported legislation that exempted all military retirement pay and survivor benefits from state income taxes; previously, only the first $6,000 of military retirement pay had been exempt. Since most veterans aren't career soldiers and therefore aren't eligible for a pension, the exemption will leave out many veterans (again, an EITC would have been a better avenue). But few politicians on either side of the aisle were going to stand in the way of helping veterans — even though Hutchinson unconscionably larded the measure with unrelated tax hikes. The legislation offset the eventual $13.4 million cost of the exemption by raising the sales tax on candy and soda. Completely unrelated to veterans' retirement income, the bill also provided a $6 million tax cut on soft drink syrup, which it paid for by taxing unemployment benefits and digital downloads. So, veterans with pensions got a bump and corporate interests got significant help, while folks downloading books and movies, as well as people in between jobs, got screwed.

In the "could have been worse" column, more credit for Hutchinson: He held at bay lawmakers from his party such as Sen. Bart Hester (R-Cave Springs) who wanted to cut $100 million or more in taxes — threatening essential state services in the process — by creating a commission to consider the future of tax policies in the state.

The commission will have to consider two issues the General Assembly punted on. A bill that would have required out-of-state online retailers to collect sales tax on purchases made by Arkansans stalled in the House, with several Republicans decrying the proposal as a tax increase even though Arkansans already are required to pay the tax by law (few do, because it requires self-reporting). Still, Amazon said it would voluntarily begin collecting sales tax from Arkansas customers beginning in March. Another bill, which merely would have referred to voters a proposal to increase the gas tax to pay for highway construction bonds, failed on similar anti-tax grounds.

Lindsey Millar

CRIMINAL JUSTICE

Actual reform

Act 423, "The Criminal Justice Efficiency and Safety Act," might be the most consequential piece of good legislation the General Assembly passed. It's a sprawling, omnibus law, with three primary components.

Most consequentially, it introduces swift and certain sanctioning, which means parolees and probationers who commit minor violations of the terms of their supervision will be sent for 45 to 90 days to Arkansas Community Correction facilities, where they will receive rehabilitative programming, instead of being sent to prison for significantly longer stints. Arkansas in recent years has had the fastest growing prison population in the country, fueled largely by parole violators returning to prison. Swift and certain sanctioning is expected to free up as many as 1,600 prison beds and save the state as much as $30-$40 million.

The law also seeks to divert from jail or prison people who commit nuisance offenses because they are high on drugs or experiencing a mental health crisis in public. It establishes Crisis Stabilization Units, regional facilities where people in crisis can go to receive treatment for several days. The law mandates the creation of three such units, but $5 million earmarked in the state budget for the operation of the facilities, paired with significant additional federal money the state expects to draw from Medicaid, could allow several more CSUs to open. The locations of the CSUs have not yet been selected, but Craighead, Pulaski and Sebastian counties are thought to be leading candidates. Finally, Act 423 also requires law enforcement officers to receive crisis intervention training to help them de-escalate interactions with people in the midst of behavioral health episodes.

The law is the product of 18 months of study and presentations by the nonprofit Council of State Governments, which reported to a Legislative Criminal Justice Oversight Task Force that bill sponsor Sen. Jeremy Hutchinson (R-Little Rock) co-chaired. Hutchinson, co-sponsor Rep. Clarke Tucker (D-Little Rock) and CSG say the new law will save the state money, which can be reinvested in effective criminal justice policies. CSG's justice reinvestment program has successfully been implemented in states across the country.

Of course, whether it's successful here will depend on policymakers seeing the reforms through. One potential stumbling block: CSG recommended that the state hire 100 new parole and probation officers to better supervise the nearly 56,000 people on parole and probation. Current supervision officers handle on average 125 cases. Governor Hutchinson's budget didn't provide for funding to hire 100 new officers, though it did make temporary funding to Arkansas Community Correction permanent, which will at least allow the department to retain the 60 officers it had hired since 2015. That's not enough, Sen. Hutchinson (who is the governor's nephew) said. He hopes a future General Assembly will approve additional funding for more officers using some of the savings generated by Act 423.

A perennial stumbling block for any criminal justice reform is the inevitable violator who commits a serious crime. A significant portion of Arkansas's recent prison growth spike came because of punitive parole policies enacted in the wake of the 2013 murder of a teenager in Little Rock by a serial parole violator. It's natural to think that locking up people who commit crimes for long stretches reduces crime, but research shows it's just the opposite, Sen. Hutchinson said.

"I've had the luxury of studying this for years now. It's hard to wrap your brain around sometimes," Hutchinson said. "Longer sentences do not, in fact, result in lower crime rates. The longer [people are] incarcerated, the greater chance of recidivism they have."

Hutchinson chaired the Senate Judiciary Committee, and many of its members, chief among them Sen. Bryan King (R-Green Forest), were hostile to the idea of moving away from incarceration in certain situations. King introduced the tough-on-crime Senate Bill 177, which would have required anyone with three stints in prison to serve at least 80 percent of any subsequent sentence. Arkansas already has a two-strikes law: After someone commits a second serious violent or sexual crime, he's required to serve 100 percent of his sentence. So King's measure would have mostly targeted low-level property and drug crimes, at huge cost: According to an impact statement, it would have added 5,499 inmates at a cost of $121 million in 2026. The total 10-year cost to the state would have been $692 million, and that's not including the significant cost of building new prison housing. King let the bill die in the House Judiciary Committee after Governor Hutchinson forcefully spoke out against it.

Three other positive new laws: Act 566, sponsored by the odd couple of Rep. John Walker (D-Little Rock) and Rep. Bob Ballinger (R-Berryville), has Arkansas opt out of a section of President Clinton's sweeping 1996 welfare reform law that prevents anyone who has been convicted of a felony drug offense from receiving Temporary Assistance for Needy Families benefits. Act 1012, from legislation sponsored by Tucker and Hutchinson, allows someone on probation or parole for an offense that did not involve the operation of a motor vehicle, and whose driver's license is suspended because of unpaid fines or fees, to continue to drive to work or school. Act 539, sponsored by Sen. Missy Irvin (R-Mountain Home) and Rep. Rebecca Petty (R-Springdale), prevents minors from being sentenced to life without parole. Before they become eligible for parole, the new law requires minors sentenced to life terms to serve 20 years for nonhomicide offenses, 25 years for first-degree murder and 30 years for capital murder. Of course, the Parole Board could repeatedly deny parole requests and force someone sentenced to a life term as a minor to spend his life in prison.

The heartbreaker of the session in criminal justice was the failure of Democratic Sen. Joyce Elliott's proposal to require racial impact statements for new criminal justice legislation. The impact statements would have provided research on whether proposed legislation would have a disparate impact on minority groups. Similar bills failed in 2013 and 2015, and this one was substantially amended to merely provide the impact statements as an option, but it died on the House floor. It was another reminder that for many white people, there is no greater insult than suggesting that they or something they do might be racist, even if the bias was unintended. One opponent, Rep. Ballinger, said he did not believe in systemic racism.

Lindsey Millar

ABORTION

Risking women's health

Women and their bodies were subjected to serious new insults this year by Arkansas legislators practicing medicine without a license.

Among the most egregious laws was the so-called "dismemberment abortion" bill, now Act 45, whose chief sponsors were Rep. Andy Mayberry (R-Hensley) and Sen. David Sanders (R-Little Rock). The bill prohibits doctors from performing what doctors believe is the safest method of second trimester abortion: dilation and evacuation. The alternatives would be something akin to a Caesarean section, in which the belly is cut open to remove the fetus, or an induced abortion, which requires the woman to go into labor to expel a fetus killed by an injection of salt water, urea or potassium chloride into the amniotic sac. Those procedures are what doctors call "high morbidity" — meaning they have a high risk of making patients sick.

Dilation and evacuation is recommended by the World Health Organization, the American College of Obstetrics and Gynecology and the American Medical Association. The difference between those organizations and the Arkansas legislature is that one group does not believe women should receive the best health care possible.

But Mayberry and Sanders and their co-sponsors think D&E, which uses a vacuum, is tantamount to butchery, even though hysterectomy and induction abortions accomplish the same end as a D&E and are far less safe.

There is no exception for incest or rape in the law. And, like previous laws passed by legislators who think their particular religious beliefs give them the right to control women, the law particularly harms women who can't afford to travel to a more broad-minded jurisdiction to exercise a legal right.

Another evil of the law is that it allows a spouse, parent or guardian to bring a civil suit against the abortion provider if the woman has "received or attempted to receive" dilation and evacuation. That means, according to abortion rights activists and Mayberry himself, a husband can stop an abortion. He may have committed rape. A parent may have committed incest. Doesn't matter.

Rep. Charlie Collins (R-Fayetteville) and Sen. Missy Irvin (R-Mountain View) brought us the bill that became Act 733, the so-called "sex-selection abortion ban." Despite the fact that there is zero evidence that Arkansas women are dashing into abortion clinics because they've determined the sex of their fetus and don't like it, the bill has the potential to create a huge burden on the provider.

Say a woman has had prenatal tests to see if her fetus has a genetic disorder. She learns there is a disorder and, by the way, the sex of the fetus. Her doctor must ask if she knows the gender of the fetus. If she answers that she does, the abortion must be delayed, because this new state law requires the doctor to "request the medical records of the pregnant woman relating directly to the entire pregnancy history of the woman." No abortion may be performed until every chart for every pregnancy generated by the woman's ob-gyn (or ob-gyns) and staffs and hospitals, every record generated during every trip to the ER she may have had to make, is supplied and reviewed by the abortion provider. Not only could that take a lot of time and generate a mountain of paperwork — what if the woman already had five children? — but it would also notify, perhaps against the woman's will, her doctors and their staffs that she is seeking to obtain an abortion.

The bill does not state what information in those records would suggest that the woman was hell-bent on not having another boy or girl.

"Why are physicians and the clinic made to be an investigative party into a woman's motives to have an abortion?" asked a spokesman for Little Rock Family Planning, the state's only clinic that offers abortion up to 21 weeks.

Rep. Robin Lundstrum (R-Elm Springs) and Sen. Scott Flippo (R-Bull Shoals), like Mayberry and Sanders, introduced what's called a model TRAP law (targeted regulation of abortion providers) meant to end abortion by imposing stricter inspection regulations on clinics. The bill allows the state Department of Health to make yearly trips to inspect clinic records and "a representative sample of procedures"; to regulate all aspects of the clinic "without limitation," and to collect an annual fee of $500.

While purporting to be a bill to protect women's health, the new law, Act 383, is designed to let the state shut down a clinic for facilities violations not spelled out in the legislation. It's not clear what violation would close the clinic. Towel on the floor? Out of paper towels? Scoop left in the break room freezer's icemaker?

As it happens, Little Rock Family Planning is inspected frequently, more than the once every year that the law already called for. The health department inspected the clinic four times in 2016, citing such things as discolored ceiling tiles and a chair with rips. The clinic's spokesman said some inspections are instigated by complaints from the anti-abortion protesters that picket outside.

The vague language of Act 383 "has potential for abuse. We don't know if we would be singled out and treated differently, if our license could be suspended for even minor paperwork violations," the spokesman said.

— Leslie Newell Peacock

TRANSPARENCY

The public's right to know took one step forward, two steps back.

Arkansas's robust Freedom of Information Act came under assault in 2017 as never before, with legislators proposing at least a dozen new exemptions to the open records law. Thanks to SB 131, now Act 474, by Sen. Gary Stubblefield (R-Branch), security plans of the State Capitol Police are no longer disclosable to the public; Stubblefield's reasoning was that someone seeking to do violence at the Capitol might request such plans, but the law is written so broadly that virtually any record of the Capitol police could fall under the new exemption. Stubblefield's SB 12 (Act 541) created a similar exemption for schools, including colleges and universities. HB 1236, now Act 531, by Rep. Jimmy Gazaway (R-Paragould), prevents the disclosure of a body-cam or dash-cam recording of the death of a law enforcement officer.

Thankfully, though, many anti-FOIA bills failed. The most significant was SB 373, by Sen. Bart Hester (R-Cave Springs), which proposed exempting attorney-client communications and work product from the FOIA if the client is a public entity. The force behind the bill was the University of Arkansas. The problem with this idea — aside from the fact that attorney-client communications can already be shielded on a case-by-case basis, by order of a judge — is that a public entity could declare almost any record exempt simply by emailing that record to its attorney. Had it passed, this loophole could have swallowed the entire FOIA.

On the bright side, Rep. Jana Della Rosa (R-Rogers) managed to pass HB 1427, now Act 318, to require candidates to file their monthly finance reports electronically, rather than on paper. HB 1010, now Act 616, by Rep. Warwick Sabin (D-Little Rock) extends the same requirement to political action committees and other groups. This matters because a searchable electronic database will make it much easier for the public to track contributions made to candidates and PACs, as well as their expenditures.

However, the legislature quashed an effort to shine a light on the darkest regions of campaign finance when it rejected HB 1005, by Rep. Clarke Tucker (D-Little Rock). The bill would have required disclosure of "electioneering" spending, meaning advertisements by independent organizations, nominally unaffiliated with any candidate, that dodge ethics laws by scrupulously avoiding the use of phrasings like "vote for" or "vote against." A growing number of states recognize that such ads — which have proliferated tremendously in recent years and comprise hundreds of millions of dollars in spending nationwide — are de facto campaign commercials and require them to be reported as such. Not Arkansas.

Benjamin Hardy

ANTI-LGBT

Threats stalled.

The legislature still shows animus toward people who don't fit its definition of normal, but Arkansans lucked out when three anti-LGBT bills failed. Two so-called "bathroom bills" that targeted transgender children and adults and another that would have let doctors refuse to perform a procedure if it offended their "deeply held beliefs" did not make it into law.

But the legislature also blocked a bill that would have corrected an injustice. SB 580, by Sen. Joyce Elliott (D-Little Rock), would have provided for the automatic listing of both parents' names on the birth certificates of children of married same-sex couples, an important factor in establishing inheritance and other matters. In a marriage between a man and a woman, the names of both parents are listed on a child's birth certificate, even in cases of surrogacy or artificial insemination. Arkansas is the only state that treats children of same-sex parents differently in this regard, seemingly in violation of the U.S. Supreme Court's 2015 ruling that struck down bans on gay marriage nationwide. Elliott's bill would have fixed the problem, but when SB 580 came before the Senate Judiciary Committee, vice-chair Sen. Linda Collins-Smith (R-Pocahontas) said same-sex parents could make a will if they wanted to ensure their kids get an inheritance.

Besides the children of same-sex couples, Collins-Smith doesn't much like transgender people, either. She introduced SB 774 to require that people use the public bathroom or changing facilities corresponding to the sex listed on their birth certificates, and that the governing body of the public entity make sure the law was enforced. Little Rock Convention and Visitors Bureau director Gretchen Hall and Verizon Arena General Manager Michael Marion told Collins-Smith in a hearing on the bill that they could not see how it would be possible to know what was on the birth certificates of the thousands of people who might answer the call of nature at an event. "It's your job to find a way," Collins-Smith snarled. She pulled down the bill when she realized it was not going to pass.

The House passed a bill introduced by Rep. Bob Ballinger (R-Berryville), who also had his mind on bathroom use, to expand the state's indecent exposure law. State law already says it is a crime to expose one's genitalia with intent to gratify sexual desire; Ballinger's bill would have made it a crime simply to expose genitalia in front of a person of the opposite sex. (Maybe it's common practice to inspect genitalia in bathrooms up in Berryville.) Though the House vote for the bill was 65 to 3, the bill went down the Senate Judiciary Committee drain, as Collins-Smith's did.

Governor Hutchinson, who did not want Arkansas to suffer economically as North Carolina did when it passed its "bathroom bill" (since partially repealed), was relieved.

Another ugly bill was introduced by Rep. Brandt Smith (R-Jonesboro): the Health Care Freedom of Conscience Act, which would have allowed doctors to refuse to administer health care services that offended their "deeply held beliefs." Smith had in mind both reproductive rights and transgender reassignment surgery. There was no support for the bill from medical professionals, and state Surgeon General Dr. Gregory Bledsoe spoke against it, saying, "If you're a member of any sort of minority group ... these sorts of bills send a message that threatens you."

Leslie Newell Peacock

AVERAGE ARKANSANS 

Workers, consumers and other enemies of the state got a raw deal.

Governor Hutchinson deserves some recognition for passing a modest income tax cut for working people this session, even if it wasn't quite the boost for the poor that he claimed (see Taxes, above). But in almost every other way, the average Arkansan got screwed by the 2017 session.

Start with Act 986, by Rep. Laurie Rushing (R-Hot Springs), which will outlaw private class-action lawsuits under the Deceptive Trade Practices Act — a cornerstone of consumer protection law. Such suits are a powerful deterrent against businesses that intentionally scam customers in various small ways, such as false advertising or misleading promotional offers. Preventing consumers from bringing claims as a class gives the unscrupulous a freer hand to prey on the unsuspecting.

Act 606, by Rep. DeAnn Vaught (R-Horatio), provides a boon to corporations by allowing an employer to sue a worker who records a video or takes photos in the workplace "and uses the recording in a manner that damages the employer." In other words, it will stop whistleblowers from documenting unethical or illegal practices, such as animal abuse at factory farms. Animal rights organizations refer to it as an "ag-gag" bill.

Maybe the biggest prize for big business, though, was the "tort reform" measure that was referred to the 2018 ballot, Senate Joint Resolution 8. Sponsored by Sen. Missy Irvin (R-Mountain Home), it proposes a new amendment to the state constitution that would place ceilings on the noneconomic and punitive damages that may be awarded to a claimant in a civil suit. Attorney contingency fees would also be capped, at one-third of the net recovery. In short, this would sharply limit the ability of someone who was grievously harmed by an act of medical malpractice to seek compensation in court. SJR 8 sparked a bruising fight in the legislature, with a few Republicans breaking ranks to speak forcefully against abridging the right to a trial by jury. But business interests — especially nursing homes — have been pushing tort reform for years, and the measure proved unstoppable. Unless Arkansas voters reject it in 2018, that is.

Speaking of abridged rights, the legislature also referred a proposed amendment that would enshrine a voter ID requirement in the Arkansas Constitution. The hard truth is that House Joint Resolution 1016, by Rep. Robin Lundstrum (R-Elm Springs), will likely pass in 2018 given the state's electoral trends. Never mind that proponents of voter ID can't cite any documented instances of voter impersonation in Arkansas, and never mind the evidence that such measures elsewhere have resulted in voters being disenfranchised — voter ID has become gospel to Republicans, aided by President Trump's falsehoods about rampant fraud in the 2016 election. Redundantly enough, the legislature also passed a voter ID bill in addition to the referred amendment, Act 633 by Rep. Mark Lowery (R-Maumelle).

Arkansas's status as the worst state in the nation for renters went unchallenged. A bill by Sen. Blake Johnson (R-Corning), now Act 159, softened but preserved the state's unconscionable, one-of-a-kind criminal eviction statute, which courts in several counties have deemed unconstitutional. Thanks to the lobbying efforts of the Arkansas Realtors Association, Arkansas also remains the only state in which there is no minimum habitability standard for rental property. HB 1166, by Hot Springs Republican Rushing, purported to address that deficiency, but the bill's proposed standards were pitifully weak — limited to electricity, water, sewer and a roof — and it may have limited renters' meager rights in other ways, so it's best it failed.

Legislators' sympathy for landlords didn't translate to protecting small property owners railroaded by the oil industry. House Bill 2086, an effort by Rep. Warwick Sabin (D-Little Rock) to more carefully examine the use of eminent domain by pipeline companies, was drafted in response to the construction of the Diamond Pipeline, which will carry crude oil across the length of Arkansas from Oklahoma to Memphis. It failed to get out of committee.

Currently, unemployment benefits in Arkansas cover workers for a maximum of 20 weeks, which is a shorter span than any surrounding state except Missouri (also 20 weeks). Act 734 from Rep. Lundstrum will soon reduce that coverage time to 16 weeks ... and reduce weekly benefits checks paid to laid-off workers. This is despite the state's unemployment trust fund having amply recovered from the recession (it now contains around $500 million) and unemployment levels at record lows. So why trim benefits now? Simple: Employers want more money for themselves.

There was at least one good piece of consumer legislation, though, sponsored by none other than Sen. Jason Rapert (R-Conway). Act 944 aims to close a loophole exploited by payday lenders, which were driven out of Arkansas some years ago by a ban on high-interest loans but recently have been creeping back into the state by charging astronomical "fees" in place of interest.

And some bad measures failed, the most obnoxious probably being HB 1035 by Rep. Mary Bentley (R-Perryville). The bill would have prohibited SNAP recipients from using food stamps to purchase items the state Health Department deems unhealthy, such as soda; it stalled in the face of opposition from grocery stores and others. House Bill 1825 by Rep. John Payton (R-Wilburn), which went nowhere, would have seized lottery winnings from citizens who have received public assistance from the Arkansas Department of Human Services. And, efforts to chip away at workers compensation failed this time around. Got to leave something for 2019.

Benjamin Hardy



          PHP Developer - ISD Networks - Malappuram, Kerala   
Coding skill in PHP, Node.js, Java, and/or C++; thorough understanding of relational databases such as MySQL or similar technologies; knowledge of WordPress.
From Indeed - Sat, 08 Apr 2017 08:17:33 GMT - View all Malappuram, Kerala jobs
          OAC: Essbase – Loading Data   

After my initial quick pass through Essbase under OAC here, this post looks at the data loading options available in more detail. I used the provided sample database ASOSamp.Basic, which first had to be created, as a working example.

Creating ASOSamp

Under the time-honoured on-prem install of Essbase, the


          dbForge Fusion for Oracle 3.8   
Powerful Visual Studio plugin for efficient Oracle database development process
          Database Tour 8.2.4.33   
Cross-database tool
          Database Tour Pro 8.2.4.33   
Cross-database tool with integrated report builder
          Quick Heal Virus Database 17.00(30 June 2   
Offers you the latest virus definitions you can use to manually update
          Dbvisit Replicate 2.9.00   
Create a duplicate for your Oracle database with this program
          Mastering PHP 7   

Effective, readable, and robust code in PHP. About This Book: Leverage the newest tools available in PHP 7 to build scalable applications. Embrace serverless architecture and the reactive programming paradigm, which are the latest additions to the PHP ecosystem. Explore dependency injection and implement design patterns to write elegant code. Who This Book Is For: This book is for intermediate-level developers who want to become a master of PHP. Basic knowledge of PHP is required across areas such as basic syntax, types, variables, constants, expressions, operators, control structures, and functions. What You Will Learn: Grasp the current state of the PHP language and the PHP standards. Effectively implement logging and error handling during development. Build services through SOAP, REST and Apache Thrift. Get to know the benefits of serverless architecture. Understand the basic principles of reactive programming to write asynchronous code. Practically implement several important design patterns. Write efficient code by executing dependency injection. See the working of all magic methods. Handle the command-line area tools and processes. Control the development process with proper debugging and profiling. In Detail: PHP is a server-side scripting language that is widely used for web development. With this book, you will get a deep understanding of the advanced programming concepts in PHP and how to apply them practically. The book starts by unveiling the new features of PHP 7 and walks you through several important standards set by the PHP Framework Interop Group (PHP-FIG). You'll see, in detail, the working of all magic methods, and the importance of effective PHP OOP concepts, which will enable you to write effective PHP code. You will find out how to implement design patterns and resolve dependencies to make your code base more elegant and readable. You will also build web services alongside microservices architecture, interact with databases, and work with third-party packages to enrich applications. This book delves into the details of PHP performance optimization. You will learn about serverless architecture and the reactive programming paradigm that found its way into the PHP ecosystem. The book also explores the best ways of testing your code, debugging, tracing, profiling, and deploying your PHP application. By the end of the book, you will be able to create readable, reliable, and robust applications in PHP to meet modern-day requirements in the software industry. Style and approach: This is a comprehensive, step-by-step practical guide to developing scalable applications using PHP 7.1. Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files e-mailed directly to you.
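
Since the blurb leans on dependency injection, here is a minimal constructor-injection sketch. It is written in VB.NET rather than PHP, purely to match the code style used elsewhere in this document, and the names INotifier, EmailNotifier and OrderService are invented for illustration; the idea (a class receives its collaborators instead of constructing them) carries over directly to PHP.

Imports System

' The service depends on an abstraction rather than creating its collaborator itself.
Public Interface INotifier
    Sub Send(message As String)
End Interface

Public Class EmailNotifier
    Implements INotifier

    Public Sub Send(message As String) Implements INotifier.Send
        Console.WriteLine("Emailing: " & message)
    End Sub
End Class

Public Class OrderService
    Private ReadOnly _notifier As INotifier

    ' The dependency is injected through the constructor,
    ' so a test can pass in a fake INotifier instead.
    Public Sub New(notifier As INotifier)
        _notifier = notifier
    End Sub

    Public Sub PlaceOrder(orderId As Integer)
        ' ...real order handling would go here...
        _notifier.Send("Order " & orderId.ToString() & " placed.")
    End Sub
End Class

Module DependencyInjectionDemo
    Sub Main()
        ' Composition root: the concrete notifier is chosen here, not inside OrderService.
        Dim service As New OrderService(New EmailNotifier())
        service.PlaceOrder(42)
    End Sub
End Module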


          Retro-inspired text adventure puts a twist on using Wikipedia   

Reading Wikipedia just got a whole lot more fun.

A new game called "Wikipedia: The text adventure" helps you navigate and learn more about the world using a clever modification of the online knowledge database's API. You can start your journey with a suggested location or choose your own, and from there on the game dynamically generates options of places to explore.


For example, I started with my home state of Maharashtra in India. From there, the game gave me a few choices of other places in the state to explore.

People have discovered classic text adventure features like the ability to pick up items, examine them, and keep them in your inventory.
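
As a rough illustration of the kind of query such a game can build on (this is not the game's actual code), the public MediaWiki API offers a geosearch list that returns pages near a coordinate. The sketch below, written in VB.NET to match the code used elsewhere in this document, uses illustrative coordinates for Maharashtra and simply prints the raw JSON response.

Imports System
Imports System.Net

Module WikiNearbyDemo
    Sub Main()
        ' Wikipedia requires modern TLS for API calls.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12

        ' list=geosearch returns pages near a coordinate; the coordinates here are
        ' illustrative values for Maharashtra, and %7C encodes the lat|lon separator.
        Dim url As String = "https://en.wikipedia.org/w/api.php?action=query&list=geosearch" & _
                            "&gscoord=19.7515%7C75.7139&gsradius=10000&gslimit=5&format=json"

        Dim client As New WebClient()
        client.Headers.Add("User-Agent", "WikiAdventureDemo/0.1 (example)")
        Dim json As String = client.DownloadString(url)

        ' A real client would parse the "geosearch" array to build the list of
        ' places to explore; here we just dump the raw response.
        Console.WriteLine(json)
    End Sub
End Module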


          Accounts Payable Analyst/Treasury - (Boston)   
Title Accounts Payable Analyst/Treasury Description * Processing of accounts payable invoices to ensure that vendor payables are in accordance to contract terms and within predetermined performance measurements o Entry and distribution of invoices to appropriate business owner for approval o Process approved invoices prepare batches & submit to Supervisor o Provide reporting for vendor payable trends o Collaborate with General Ledger to ensure accurate account posting * Manage Associate Expense reports o Ensure auditing of reports for compliance to company guidelines o Coordinate with Associate and/or Department managers to resolve discrepancies * Vendor Administration o Maintain current vendor database according to company guidelines o Process and Track Daily Vendor Add / Change Requests o Conduct Bi monthly Vendor Due Diligence screening * Support Treasury Analyst Functions o Support Treasury Analyst in operational functions o Distribute daily cash reporting o Train and support weekly payable electronic fund transmission /check print o Manage and distribute Petty cash, complete month end reconciliation. o Assist with Month End Close Procedures, and other department responsibilities as required Requirements Qualifications: * Associates Degree in Accounting/Finance * 2/3 years of relevant Treasury/Banking experience (international preferred) * Previous experience in accounts payable and customer relations is preferred. * Analytical skills with knowledge of spreadsheet applications.
          Staff Accountant - (Springfield)   
Staff Accountant: Western New England University is seeking a full-time Staff Accountant for the Controller's Office whose primary purpose will be to maintain the position control database and provide general accounting support for the Controller's office during the implementation of new campus-wide software programs. This position is funded by a temporary budget. Major responsibilities will include: maintenance of the position control database, preparation of budget adjustments, preparation of budget advisory meeting documents and various finance committee meeting notes, as well as maintenance of the Controller's office webpage.
          Accounts Payable Analyst/Treasury - (Boston)   
Title Accounts Payable Analyst/Treasury Description • Processing of accounts payable invoices to ensure that vendor payables are in accordance to contract terms and within predetermined performance measurements o Entry and distribution of invoices to appropriate business owner for approval o Process approved invoices prepare batches & submit to Supervisor o Provide reporting for vendor payable trends o Collaborate with General Ledger to ensure accurate account posting • Manage Associate Expense reports o Ensure auditing of reports for compliance to company guidelines o Coordinate with Associate and/or Department managers to resolve discrepancies • Vendor Administration o Maintain current vendor database according to company guidelines o Process and Track Daily Vendor Add / Change Requests o Conduct Bi monthly Vendor Due Diligence screening • Support Treasury Analyst Functions o Support Treasury Analyst in operational functions o Distribute daily cash reporting o Train and support weekly payable electronic fund transmission /check print o Manage and distribute Petty cash, complete month end reconciliation. o Assist with Month End Close Procedures, and other department responsibilities as required Requirements Qualifications: • Associates Degree in Accounting/Finance • 2/3 years of relevant Treasury/Banking experience (international preferred) • Previous experience in accounts payable and customer relations is preferred. • Analytical skills with knowledge of spreadsheet applications. • Highly organized, detail oriented and good problem solving skills. • Self motivated: Able to work independently and as a team member • Strong interpersonal, written and verbal communication skills. • Ability to react in a fast paced and changing environment while not losing focus on priorities. Source: http://www.juju.com/jad/000000009ft191?partnerid=af0e5911314cbc501beebaca7889739d&exported=True&hosted_timestamp=0042a345f27ac5dc69354e46c76daa485f5433b1779459d32f1da485eef8e872
          data entry clerk - Pro Form Products Ltd. - Milton, ON   
Receive and register documents for data entry; store, update and maintain databases.... $15 - $18 an hour
From Canadian Job Bank - Fri, 02 Jun 2017 18:25:51 GMT - View all Milton, ON jobs
          Exportizer Pro 6.1.2   
Database tool for viewing, editing, filtering, copying, exporting tables of different formats (DB, DBF, MDB, XLS, GDB, IB, HTML, UDL, DBC, TXT, CSV) to clipboard or file (XLS, RTF, XML, HTML, TXT, CSV, DBF) with full command line support.
          Exportizer 6.1.2   
Database utility for viewing, editing, filtering, copying, exporting database files (DB, DBF, TXT, CSV, etc.) to clipboard or file (XLS, RTF, XML, HTML, TXT, CSV, DBF) with full command line support.
          Stalking Wild Mushrooms   

Need a reason to fall in love with autumn? This time of year is often soggy and cold. It is universally viewed as a time of decline, as trees shed leaves and days grow aggravatingly short. But a walk in the woods this time of year can paradoxically lift one’s spirits, if one knows what to look for.

Like little party torches, mushrooms can be astoundingly vibrant and beautiful. Jack-o'-lantern mushrooms are bright orange and glow in the dark! The fabled Fly Agaric mushroom is believed by some to be solely responsible for flying reindeer and Santa's red and white attire. Incredibly, there are even edible brilliant blue mushrooms.

Fresh edible wild mushrooms are a gourmand’s delight. While many of these wild mushrooms can be purchased in grocery stores, there is simply no comparing them to ones you may chance to pick for yourself. However, a hopeful mushroom hunter should follow this list of precautions:

  • If there’s any doubt as to the identity of a mushroom, don’t eat it.
    I started learning about mushrooms last year. I would spot them while I was walking, bring them home, study them, draw them, and then attempt to identify them based upon photographs and also based upon running through a classification key, both of which can be found in this trusty mushroom guide. Most would advise that you always get a second opinion from an experienced mushroom hunter before sampling.
  • Do not pick mushrooms; rather, cut the stalk free with a knife.
    That way you won’t accidentally dislodge some of the mycelium along with your mushroom.
  • Sustainably harvest
    Preferentially take good quality specimens only when there are others in the vicinity that you will leave to spread spores, thereby increasing your bounty in the future. Mushrooms are the above-ground reproductive parts of the underground organism (the mycelium), so while taking the fruiting body does not hurt the mycelium, leaving some to stand and spread spores will be beneficial.
  • Don’t visibly mark your patch or brag to others about the whereabouts.
    We are all curious creatures – most of us will investigate man-made markers in the woods. The fewer people know where you are scavenging, the more likely you can sustainably harvest your patch. Learn how to navigate by natural landmarks.
  • And on that note, keep track of where you are going.
    While hunting down mushrooms, one tends to keep one’s eyes on the ground. In this manner it is very easy to get turned around in the woods. Being aware of natural landmarks can be helpful here.
  • When, at last, you discover a patch of wild edible mushrooms, know how to store and cook them properly.
    Keep them dry and cool and eat them as soon as possible (the last part isn't that hard, really). If this is your first experience eating this particular mushroom species, only serve a small taste. Some people are more sensitive to certain mushrooms than others, so make sure your mushrooms agree with you before you indulge. Dust off humus and pine needles with a soft dish towel. Butter and garlic are your best friends. Slice your mushrooms so that they are uniformly sized. Heat, on medium, in oil and/or butter on one side for about a minute. Turn, reduce heat to simmer, add garlic and/or a splash of marsala, madeira, white wine, or sherry. Cook until you see light browning. Most mushrooms are best served without strong interfering flavors.

Want to learn more? Why not join a local mycology club or attend one of their mushroom shows?


          Updated Wiki: JobManager   

Job Manager


Use this component to run SharePoint jobs from central administration or change a job's schedule.
For instance:
  • You can run a full profile synchronization without needing stsadm
  • Jobs installed with a feature receiver can be changed, without having to change and re-deploy
  • When developing new jobs, you can run them on demand, rather than using a minute schedule (which may not be ideal)

Installation


Run the commands below to install and deploy as needed, or use Central Administration to deploy. The final command generates the correct navigation after the solution has been deployed.

cd "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\"
stsadm.exe -o addsolution -filename components2.jobmanager.wsp
stsadm.exe -o deploysolution -name components2.jobmanager.wsp -local -allowgacdeployment
stsadm.exe -o copyappbincontent

Configuration


This component can be found in the Operations section of Central Administration.

operations.jpg

The job list screen is the same as the normal job definition list screen.

list.jpg

From the main screen you can perform these actions:

1. Change a job's schedule
2. Execute a job
3. Clone a job for one time execution
4. Clone a job for normal execution

If the job has a schedule, you can change all the properties based on the type.
Click OK to save the new schedule.

If you wish to call the Execute method on the job, select a content database from the dropdown, then click Execute.
This calls the Execute method on the SPJobDefinition.
It does not update the last run date of the job.

The last two buttons, clone schedule and clone one-time, copy the job into a new definition with the same details.
The reason I chose to implement a job clone was that I didn't want to change the original job.
Normally, one-time jobs cannot be changed to scheduled jobs and vice versa; this is a SharePoint limitation.
Cloning will show the job in the standard job definition lists, with (clone) in the title.
These clones can be deleted from the main screen.
Cloned one-time schedule jobs will be run ASAP.

manage.jpg

Future Developments


None identified

          Conservative Group Sues After Internet Giant Labels Them a ‘Hate Group’   

Gavel_2.jpg

When you’re wanting to donate to a charity or nonprofit group, it’s a good idea to do a little research to make sure they’re legitimate and don’t support causes that you oppose. That’s why GuideStar exists, and they’ve become the internet’s largest compendium of data on charities, giving potential donors information on 2,000 different charities, according to the Associated Press.

But then they did something really bizarre. Early in June, they allowed the left-leaning Southern Poverty Law Center to attach a “hate group” label to any nonprofit they didn’t like.

It was akin to giving a rambunctious child a brush full of paint and leaving them alone in a white-walled room for an hour.

GuideStar_Hate_2.jpg

The SPLC went to work, labeling 46 groups as “hate groups.” Some of them included conservative and Christian groups like the conservative Christian Family Research Council, the fundamentalist Christian American Family Association, the conservative Christian Alliance Defending Freedom, and the evangelical Christian Liberty Counsel.

The SPLC’s label sat prominently near the top of every flagged group’s page, warning potential donors to stay away. GuideStar defended their decision to let the SPLC run wild.

Not surprisingly, 41 of the “hate groups” wrote a letter to GuideStar to complain, according to Life Site News. That’s when GuideStar decided to remove the “hate group” label from those 41 groups’ pages, but they made it clear the SPLC’s information on each of those nonprofits would remain readily available upon request.

Now one of those conservative Christian Groups is suing GuideStar for defamation. Florida-based Liberty Counsel uses litigation to defend the rights of Christians in America, and now they’re using their legal muscle to defend what they do. The case is called “Liberty Counsel vs. GuideStar USA, Inc.”

Liberty Counsel chairman Mat Staver claims, "GuideStar’s CEO, Jacob Harold, is using GuideStar as a weapon to defame, harm, and promote his liberal agenda by using the SPLC to falsely label good nonprofit organizations as ‘hate groups.’"

"GuideStar has not retracted its ‘hate group’ label and continues to provide false, defamatory and harmful information it pushes as fact to the public. The damage by GuideStar is far reaching because this false and defamatory labeling has been spread through scores of media sources and the internet. It also appears on the GuideStar Wikipedia page,” he continued.

As of Thursday, the Wikipedia pages for most of the targeted groups mentioned above now prominently feature the SPLC’s designation of them as hate groups, making the attack widespread indeed.

What do you think about this? Comment, react to, or share this on Facebook.


          Part-Time Dispatcher/Customer Service Rep - Cash for Trash - Stittsville, ON   
Proficient computer skills (email, Google, Google Drive, Word, Excel, online databases, etc.). Handling and balancing cash....
From Indeed - Wed, 24 May 2017 17:57:41 GMT - View all Stittsville, ON jobs
          Dispatcher/Customer Service Rep - Cash for Trash - Stittsville, ON   
Proficient computer skills (email, Google, Google Drive, Word, Excel, online databases, etc.). Handling and balancing cash....
From Indeed - Wed, 24 May 2017 17:49:26 GMT - View all Stittsville, ON jobs
          Litigation Support Specialist - Miller Thomson - Canada   
We are seeking a Litigation Support Specialist to join our Litigation team. Troubleshoot litigation support applications and databases....
From Miller Thomson - Fri, 31 Mar 2017 02:47:06 GMT - View all Canada jobs
          Information On Old Indian Coins Is Just A Click Away! (Lower Parel)   
Access the biggest online database of ancient coins of India with several categories to choose from only at Mintage World. When you want to research about [Indian old coins][1], there can’t be a more appropriate website which offers detailed description...
          How To Diagnose And Fix Incorrect Post Comment Counts In WordPress   


Introduction

If your WordPress comment counts got messed up, whether because of a plugin (I'm talking about you, DISQUS) or you messed with your database manually and did something wrong (yup, that's what I just did), fear not – I have a solution for you.

But first, a little background.

Comment Counts In WordPress

Here's how comment counts work in WP:

  • Posts live in a table called wp_posts and each has an ID.
  • Comments reside in a table called wp_comments, each referring to an ID in wp_posts.
  • However, to make queries faster, the comment count is also cached in the wp_posts table, rather than getting calculated on every page load.
    If this count ever gets out of sync with
  • ...
    Read the rest of this article »
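Based on the schema sketched above, one way to bring the cached counts back in sync is to recalculate comment_count in wp_posts from the approved rows actually present in wp_comments. The following is a minimal sketch in Python, assuming direct MySQL access via pymysql, the default wp_ table prefix, and placeholder credentials; the same UPDATE statement could just as well be run from any MySQL client.

import pymysql  # assumes direct MySQL access to the WordPress database

# Recalculate the cached comment_count in wp_posts from the approved
# comments actually present in wp_comments (default wp_ prefix assumed).
RESYNC_SQL = """
    UPDATE wp_posts p
    SET p.comment_count = (
        SELECT COUNT(*)
        FROM wp_comments c
        WHERE c.comment_post_ID = p.ID
          AND c.comment_approved = '1'
    )
"""

# Placeholder connection details: replace with your own.
conn = pymysql.connect(host="localhost", user="wp_user",
                       password="secret", database="wordpress")
try:
    with conn.cursor() as cur:
        cur.execute(RESYNC_SQL)
    conn.commit()
finally:
    conn.close()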

          Inside Sales   
FL-Deland, Sourcing new sales opportunities through inbound lead follow-up and outbound closed calls and e-mails Understanding customer needs and requirements Close sales and achieve quarterly quotas Maintain and expand your database of prospects within your assigned territory Working hours: 8-8 Skills: Proven inside sales experience Strong phone presence and experience dialing dozens of calls per day Abilit
          Eva Crane, the physicist who loved bees   
IPAZIA – There are more than 20,000 species of bees in the world, many of them at risk of extinction. Pollinating bees are essential for agriculture and for maintaining the balance of many ecosystems. For this reason, today more than ever, it is essential to study these insects and try to understand what actions to take to preserve their survival. The International Bee Research Association (IBRA), besides actively promoting knowledge of bees and of the various forms of beekeeping, holds the world's largest database of scientific studies and research on the subject. The association was founded and led for over thirty-five years by a woman, a physicist by training, who began working with bees almost by accident, thanks to a wedding gift. Her name was Eva Crane. Eva Crane was born Ethel Eva Widdowson in 1912, in a suburb south of London. She had an older sister, Elsie, who like her showed an early passion for science (she would become one of the most important nutritionists of the 20th century). Both attended the Sydenham School, a girls' school where they were able to express their abilities fully. Eva excelled in all the scientific subjects and in 1931 won a scholarship to enter [...]
          Did Beyonce and Jay-Z Accidentally Reveal Their Twins' Names?   
Post by Maressa Brown.

Beyonce and Jay Z Solange's wedding

When their eldest daughter was born in January 2012, Beyonce and Jay-Z
attempted to trademark "Blue Ivy," and it looks like now that the power couple's twins have arrived, they may have hit up the United States Patent and Trademark Office again. And it's with this move that we may very well have a believable lead on the Carter twins' names!

Shonda Andrews of BrownGirlPrints.com broke the extremely intriguing news that the babies' names may very well be ... drumroll, please ... Rumi Carter and Sir Carter.

More from CafeMom: 23 Throwback Pics That Show Beyonce's Rise From Teen to Style Queen

Check out the telling screenshot Andrews provided that seems to confirm the babies' monikers:

beyonce babies names US Patent search

Laura Wattenberg, founder of BabyNameWizard.com, feels that given the trademark drama surrounding Blue Ivy's name, Rumi and Sir may very well be IT.

"Either those are the names or Beyonce's purposely using the database to mess with us all!" Wattenberg tells CafeMom. "And I don't think you'd use the US trademark office to play a little joke on the public." 

If these are, indeed, the twins' names, Wattenberg feels like both are very much in line with two current baby naming trends. "Sir is right in keeping with one of the hottest, fastest-growing name styles among celebs and in America as a whole," she says. "That is what I call 'exalted names.' Royal or divine titles or names of gods that celebrate the baby." Other recent examples of this trend include Kourtney Kardashian and Scott Disick's son Reign, and, of course, Kim Kardashian and Kanye West's son Saint.

Meanwhile, Rumi echoes big sister's name in a way, Wattenberg notes. "From Blue Ivy, you'd be expecting another sort of compact name -- and Rumi, for instance, in terms of sound, fits with the hottest style in America: raindrop names," she explains. Raindrop names are "little, miniature names with a smooth sound."

More from CafeMom: 

Examples include super-popular boy names Noah and Liam and well-loved girl monikers Mia and Mila, Aria, and Luna (Chrissy Teigen and John Legend's daughter!). Wattenberg explains:

"Parents love these names that are totally smooth, no hard stops in them, no hard edges, and they are short but complete. They are not nicknames for anything, but they are tiny."

She suspects that Rumi may be, of course, a nod to the famous 13th-century Persian poet, but more believably, the Carters' potential choice may be linked to its Japanese meaning. "Some of Rumi's Japanese meanings include 'flowing like water' and 'beauty,'" Wattenberg says. 

All of that said, while we may assume Sir is the Carters' son and Rumi is their new daughter, we've yet to get official confirmation on the babies' gender(s). Guess we still won't know for sure 'til we get that official statement from the proud, oh-so-private parents ... 


          Health Data Analyst II   
Employer: 
Region of Waterloo

Type: 
Full-Time
Term (Duration): 
Contract

Description: 
Health Data Analyst II

Length: Temporary Full-time (for approximately 15 months)
Department/Division: Public Health & Emergency Services/Administration
Hours of Work: 35 hours per week
Union: C.U.P.E. Local 1883
Grade: 11
Salary/Wage: $58,039.80 - $65,938.60 per annum/$31.89 - $36.23 per hour
(July 1, 2017 rates)
Location: 99 Regina St., S., Waterloo
Closing Date: July 6, 2017

Description of Duties:
Ensures current epidemiological information is readily available for Public Health planning and surveillance initiatives by extracting, manipulating, analyzing, and interpreting health-related statistical data. Provides analytic support to team projects for the benefit of the Department, other Regional departments, and community partners.

Knowledge, Skills & Abilities Required:
- Thorough working knowledge of population health data analysis and management, acquired through a Master's degree (e.g., public health, health sciences, statistics, health informatics) plus 6 months of experience, or an equivalent combination of education and experience such as a Bachelor's degree in a related field plus 2 years of related experience in population health data analysis and management.
- Ability to extract, collate, analyze, and interpret health data in accordance with established principles and methodology, and use reasoning and problem solving skills to identify data quality and software issues and develop and recommend solutions.
- Skill in database development, management, and analysis; verification and quality control procedures; and in-depth knowledge of Excel and other statistical software (e.g., SPSS, SAS, Tableau, Access), and word processing software (e.g., Microsoft Word, PowerPoint). Skill in (or knowledge of and ability to learn) GIS mapping and spatial statistical software.
- Communication and human relations skills in order to communicate with internal and external stakeholders from diverse disciplines, deliver training, present information, coordinate and support project teams, and participate as an effective team member in a multidisciplinary team setting.
- Ability to communicate effectively, prepare written reports, and display statistical data in easy to understand formats. Ability to communicate technical information clearly to professional and lay audiences. Ability to draft technical documents, involving analytical or interpretive descriptions.
- Ability to travel within and outside of the region to attend meetings, workshops, training, and conferences.
- Ability to support and project values compatible with the organization.

Date Posted: 
Thu, Jun 29 2017

Job Location: 
Kitchener
Wage: 
$58,039.80 - $65,938.60 per annum/$31.89 - $36.23

How to Apply: 

Please apply online, by the closing date Jul 06, 2017 quoting competition number 2017-1538, or drop off your resume to the Region of Waterloo, Citizen Service Associate desk located on the main floor at 150 Frederick Street, Kitchener, ON N2G 4J3.

We thank all applicants in advance; however, we will be corresponding only with those selected for an interview.

The Region of Waterloo is an equal opportunity employer committed to diversity and inclusion. We encourage qualified applicants to apply and will accommodate the needs of qualified applicants under the Human Rights Code in all parts of the hiring process.

Alternate formats of this document are available upon request. Please contact the Service First Call Centre at phone number (519) 575-4400, TTY number (519-575-4608) to request an alternate format.

Deadline: 
Thu, Jul 6 2017

          Database Marketing Manager - San Manuel Indian Bingo & Casino - Highland, CA   
Collaborates and interacts with marketing leaders, internal creative personnel and external agency partners to ensure that all SMC loyalty programs...
From San Manuel Indian Bingo & Casino - Sun, 14 May 2017 06:26:19 GMT - View all Highland, CA jobs
          Office Support Specialist I - Commonwealth of Massachusetts - Wakefield, MA   
Fingerprint-based check of the state and national criminal history databases on. The selected candidate for this position will be responsible for the... $43,650 - $61,086 a year
From Commonwealth of Massachusetts - Tue, 27 Jun 2017 21:50:35 GMT - View all Wakefield, MA jobs
          Migrate database for mobile application from Parse.com to similar service by caius9090   
My Android app back-end was using parse.com to hold the data. This service has now shut down, but I want to see if there is still some way to migrate the data out and host it on a similar service. Migration... (Budget: $30 - $250 USD, Jobs: Android, Cloud Computing, Database Programming, Mobile Phone, PHP)
          New image 2017-06-30   
1 new image has been registered in the database.


          New images 2017-06-28   
211 new images have been registered in the database.


          New images 2017-06-27   
29 new images have been registered in the database.


          New images 2017-06-26   
187 new images have been registered in the database.


          New images 2017-06-22   
2 new images have been registered in the database.


          New images 2017-06-16   
7 new images have been registered in the database.


          New images 2017-06-09   
16 new images have been registered in the database.


          New images 2017-06-08   
8 new images have been registered in the database.


          New images 2017-06-02   
7 new images have been registered in the database.


          New image 2017-06-01   
1 new image has been registered in the database.


          Internal Controls Senior Analyst NA - Mondelez International - Toronto, ON   
Lead SOX reviews including updating control documentation in GRC database and providing oversight to the IC COE team for remote testing....
From Mondelēz International - Sat, 24 Jun 2017 10:31:57 GMT - View all Toronto, ON jobs
          Azure Automation Methods   

Tim Radney of SQLskills walks through multiple automation methods you can use to manage and maintain your Azure SQL Databases.


          Provisioning just got BIGGER   

The recent launch of Redgate SQL Clone v2 has removed the previous 2TB size limit, as the tool now supports cloning databases up to a whopping 64TB. In this post, Karis Brummit explains how the increase has been possible.


          Comment on FileMaker Pro 15 – Review by FileMaker Pro 16 Review - MyMac.com   
[…] barely a year since Apple subsidiary, FileMaker, released version 15 of FileMaker Pro. No longer merely a database, nor even just a RDBMS (Relational Database […]
          Redneck DBA   

It’s 5:47AM and two things woke me up.

The dogs at the dairy next door. Like clockwork, except driven by the sun and not adjusted for human convenience. Rumour has it that no “foreign” non-Homo sapiens has ever made it halfway down the driveway. There may be no scientific data to back it up, but I take it as fact.

The other is a heifer mowing grass three feet from the bedroom window. A beautiful animal, but get the fuck off my lawn! The 2-wire electric fence must have shorted due to a fallen branch, and they have waltzed right into the house paddock.

I command the sleeping Blue Heeler.

KADIE! GET IN TO ‘EM!

I should have known the ensuing chaos would wake the whole family. Doesn’t matter…it’s a school day.

There is so much data to check and record. First the rain gauge, humidity sensor and wind speed average for the night. The thermometer backs up the fact that we are a having a cool summer. The chicken output remains a steady one egg per chicken per day. Experience says that any variation from that rate usually brings bad news. The Access database that manages these data sets has performed flawlessly for 8 years. If it ain’t broke…

Making my way to the network, I discover that 2 computers have rebooted during a power spike/surge. The UPS has protected the main development machine and backup server. Maybe next payday I will buy another UPS.

The spam idiots are relentless. Opera always does a great job filtering the crap but I check for false positives and find a perfect score. No sign of any problems from clients. The day just got a little brighter.

“Eat your breakfast…get dressed…your shoes are outside…where is your hat?...Hurry up or we will miss the bus!” The morning ritual is completed and we start the 15km drive down the dirt road to the bus stop.

I like this time of the day. With the radio playing mindless garbage, it gives me time to contemplate the day ahead and to try and model various database and code problems. With the kids safely on the bus, I turn for home. Nothing brilliant comes from my mind during the trip.

2 shots of locally grown coffee are consumed quick smart and I make my way towards the office. I have 4 different projects that I am currently working on. Two of them are in the .NET space while the others are a LAMP stack and a legacy VB6 data warehousing project. I find myself confusing language syntaxes each time I switch between projects. The equality and assignment operators always screw me.

I classify the programming task I am working on into one of three realms. I call them the “B” worlds. Bits for low level programming, Bytes for general programming and Boolean for the SQL work. Thankfully, the closest I have come to Bits programming in these projects is interfacing with USB devices.

The contrast between the B world and R (real) world is vital for my sanity. I like to finish the day doing something on the farm. Today I have a choice of 2 tasks. Fix the slave cylinder on the ute or turn the blood and bone fertiliser compost heap. I don’t feel like working upside down at the moment, so I select the latter task.

Walking back to the house brings a smile to my face as I realise that tomorrow is more of the same.


          A letter to MS press writers   

Dear Microsoft Press Writers,

As a long time user of your database products, I find myself becoming increasingly aggravated by the utter nonsense that your press releases state.

Specifically, the utter lack of knowledge of the theory that underpins your enterprise database engine.

Considering that you are increasingly trying to compete in the VLDB space against Oracle and DB2, it is imperative that no ignorance is shown when it comes to the relational model.
The men and women who manage these VLDB environments will have the “ear” of CTOs and CIOs. These people are usually well versed in the relational model and can spot bullshit from a mile away.

Unfortunately, your current press releases pertaining to Katmai, showed enough ignorance to be spotted by the Huygens probe currently buried under the surface of the moon Titan.

The offending text is as follows.

Beyond Relational Data
Manage any type of data including relational data, documents, geographic information and XML

Yours truly,

A DBA


          A new error   

It had been nagging me for a while.  When I started building this application, I reached into my code library and started copying and pasting various bits and pieces.

One in particular was an Error dialog control.  My exception framework uses this form to display errors to the user.  Depending on the type of exception, the interface changes to reflect the severity and type.

The original control was built nearly 6 years ago, so I quickly modified the internals (overloaded the constructor) to accept the new application's typed exception and thought no more of it.

There are no technical issues with it.  It works.  But boy, it is UGLY when compared to the rest of the application.

So I set about changing the interface.  A fairly risky proposition in a stable application.  The exception framework abstracts the calling of this control, so it has a very minimal profile to the rest of the application.

Feeling brave, I dove right in... 

The image below shows a database violation.  I have deliberately blanked out the name of the application. 

The client is a fan of "Little Britain" and insisted on the text in the form header.

The "Details" tab displays the stack trace, and the "Environment" tab contains information about the computer, the OS, and the application's state.

So what do you think? 


          What Will That College Degree Cost In Kansas? Online Tool Lets Families Know   
Students who complete an associate’s degree at Pratt Community College that prepares them to become electrical linemen earn just under $100,000 annually five years after graduation, according to a massive database now available online as an interactive tool. That is the fastest route to such high earnings among the more than 1,000 degree programs at Kansas’ 32 public two-year and four-year colleges and universities, a fact that doesn’t surprise the program’s director, David Campbell. Graduates enter five-year paid apprenticeships upon completing their degrees , Campbell said. Then, as journeymen, they often face challenging conditions on the job. “The risk of the heights that they work at — and the high voltages that they're around every day,” he said by way of example. “And then we also have the issues of the storms — ice storms in the winter time, blizzards, and tornadoes, thunderstorms in the summer.” Publishing median earnings associated with each degree program is just one part of
          *MERGE* Avoid reading or writing the 32 locking bytes at the end of the first meta-page of an LSM database. (tags: trunk)   
          Avoid reading or writing the 32 locking bytes at the end of the first meta-page of an LSM database. (tags: lsm-metapage-fix)   
          From Intuition to Inspiration    
The experiences of recent weeks have shaped the content we present in the new Aktualno 2.0. I hope it will be useful to you:
  • when you are making decisions about your own education and training,
  • when you doubt the ideas you wake up with in the morning, catch on a walk, or sense during a conversation with business partners,
  • when your view differs from everything you have heard up to that point,
  • when it is not clear to you what is not clear to others.
Within Vibacom's trainings, and later at the Management Board of the InCo movement, we combined many years of experience in developing innovation ecosystems with the experience of the last two years in understanding and developing intuition into a new business model: a holistic model of innovation development – 5I (inspiration, idea, invention, innovation, improvement) (Figure 0). The new model shows a direct connection between a person's unconscious and conscious activity, in the process of generating ideas and making decisions as well as in sustainable innovation.


Source: InCo movement, 2013
The key messages of this new insight are (Figure 1):
  • one possible starting point of the innovation process is inspiration, which comes from the field of the unconscious (another can come from a purely rational approach – analysis),
  • intuition is a person's natural tool for connecting with the unconscious,
  • an individual's unconscious field contains all of their acquired knowledge, experience, genetic memory, the impulses of the present space, and elements of collective/planetary/cosmic consciousness,
  • inspiration is linked to a person's readiness for difference, for change, for the new,
  • people can increase their openness to inspiration by clearing emotional, spiritual, energetic and physical blockages, by consciously increasing the sensitivity of their senses, and by developing full presence in the moment they are in,
  • each step of the innovation process is at the same time a loop that dynamically returns to all previous points; every moment changes the conditions under which we operate, connect and co-create, so it is crucial that we stay sensitive to all changes in and around us and dynamically adapt our activities to their influence.
Source: InCo movement, 2013

Further development of the model:

1. At the level of the individual: the area we have become aware of further reinforces the need for the holistic development of the individual at all six levels of their embedding in space (Figure 2), and for developing relationships with everything a person is and with everything around them. A particular challenge will be the balanced development of all six levels, understanding the multidimensional functioning of the human being, and developing an environment for their flourishing while respecting the needs of the other stakeholders of the space/planet.


2. At the level of organizations/communities: the same applies to organizations and communities, which are beings in their own right and are themselves embedded at various levels in the space in which they operate. As with individuals, organizations also have their own unconscious. It influences the decision-making of everyone who connects within them. It influences their inspiration, the generation of ideas and the shaping of innovations that do, or do not, appear in organizational environments. For successful idea generation and the market manifestation of innovations it is therefore important to maintain permeable and active organizational relationships that influence the emergence of inspiration for the good of the organization (Figure 3). All of this is an open invitation to design new structures, business approaches and working methods that will, step by step, take into account and integrate new insights about who we are, everything we are connected with, everything we influence and everything that influences us. In doing so we co-create a collective consciousness – "grupumovanje" (an old Slovenian expression for shared common-sense reasoning) – which makes us all stronger.

3. At the level of other structures: the new insights only further confirm the multidimensional embeddedness of the human being, not only in the intimate space but also in all the spaces in which a person moves, acts and coexists (geographic locations, the level of the state, the tribe, communities of interest, the planet). The characteristics of the organization in relation to the individual also apply at this level (point 2). At the same time, the connections among the structures themselves need to be highlighted, as they create new links and embeddings with one another. Each structure has its own conscious and unconscious field through which it enters into relationships with other structures and, consequently, with the human being (Figure 4).


Why these insights are useful for a model of sustainable innovation:

At least three key things matter for the successful, sustainable development of innovation ecosystems:
  • a holistic approach
  • dynamic structures that ensure constant adaptation and the creation of innovative breakthroughs based on authenticity, an understanding of needs, and the creation of value for the benefit of the stakeholders of the innovation ecosystem and of society at large
  • the continuous personal development of the people who co-create in the innovation ecosystem.
In searching for such solutions, the purely rational level of our perception lets us down. It cannot handle the high degree of complexity of insight and decision-making needed to manifest sustainable and positive innovations. We have therefore been given the opportunity to awaken one more tool that is naturally ours – intuition. With its help we complement and enrich rational decision-making and give our actions a more lasting value, through in-depth dialogue at the level of the individual or through "grupumovanje" at the level of structures.

Where these insights are useful and directly applicable:
  1. in shaping development visions (cross-structural cooperation, participatory models)
  2. in developing new products, services and solutions (understanding needs and behaviours, designing an offer that creates value for the target customer segment)
  3. in shaping marketing strategies (establishing open, direct contact with market niches, understanding real needs, conveying information in a form the market niche understands)
  4. in forming teams (understanding the multidimensionality of individuals and finding the winning combination, for which the originator of the idea or the project leader should take responsibility)
  5. in building relationships with suppliers of knowledge, materials, semi-finished products and the like (understanding the shared mission and the market needs being addressed)
  6. in shaping one's own identity and understanding the direct and indirect consequences our actions create at all levels of our presence and embeddedness.

Finally, let me emphasize once again: intuition is an important channel, a tool for accessing unconscious information that, together with conscious information, leads to the right decisions. Unconscious information influences, whether we intend it or not, our decision-making, the emergence of inspiration (insight), and holistic, systemic action. I hope these insights will help you shape a stable, sustainability-oriented environment and useful innovations. Sometimes we find ourselves at a dead end; sometimes we feel we understand nothing. But all of this is only proof that we are learning, and once we break through the threshold of frustration, a new field opens up in which we can act successfully. The purpose of today's contribution was to shed light on one additional dimension in the search for answers to the many questions that arise in our private, business and social lives. May your steps be fruitful and joyful, with weight and depth of impact.
 
Cheers, Violeta


Recommended viewing: 

How we can collaborate 
Where we can meet

Interesting items

          Space and Intuition   
When, within the InCo movement last year, we placed holistic decision-making and the role of intuition within it into the public space, we sensed from the very beginning that we had opened an important topic shared by different fields of work and creation. The relationships between knowledge, experience and the unconscious, and their influence on our decision-making, perception, understanding and acceptance, are an immense source of inspiration, ideas, inventions and innovations. The mere awareness that these relationships exist, and of how they appear in different spaces, enriches us and helps us understand who we are and how we move and respond. Today's guest constantly explores spaces that arise depending on the elements that define them. He uncovers their behaviour, their influence on our perception, and the reasons that lead to their emergence. This is Or's story. Allow it to captivate and inspire you. Perhaps his words will make your world richer.

Q: Who is Or Ettlinger?
Or:
I am fascinated by complex systems and by discovering the simple principles behind them – taking something that seems complex and explaining it, making it accessible and usable. It is a way of understanding the world that I pursue in life generally as well as across the various fields I am interested in: architecture, art and computer technology, creativity, beauty, digital images, culture, design, personal development, music and human nature ... In each of these fields I can also be a practitioner or a theorist, a teacher or a student, or even several of these roles at once. In some fields I have already achieved proven success; in others I am still working on it.
A short biography
Dr. Or Ettlinger is a visual artist, theorist and assistant professor at the Faculty of Architecture of the University of Ljubljana. His work lies at the intersection of art, architecture and computers.

Q: What does "space" mean to you?
Or:
Space is the context in which things appear; it is the background that allows them to show themselves and become what they are.

Q: The title of your book is The Architecture of Virtual Space. What does virtual space mean to you? How would you define it?
Or:
I spent a long time on this question, working through various related fields both academically and practically. In the end this led to an understanding of virtual space that differs considerably from most commonly accepted views of it. I do not treat virtual space in connection with computer technology, nor do I see it as yet another world of imagination, nor as a general way of talking about things that do exist but are rather intangible. All of these can be useful as metaphors, but I wanted to find a more consistent definition of what those metaphors are supposed to point to.

Instead, I began to treat virtual space primarily as a visual phenomenon: a visual world that we experience and that is separate both from the physical space around us and from the mental space of our imagination. In other words, it is about pictorial images of various kinds – paintings, photographs, TV, films, video games, and so on. Whenever we look at any of them, essentially the same thing happens: we are looking at a physical object with a certain pattern on it, yet we do not perceive the pattern at all. Instead, through the pattern we see a space and objects that seem to stand inside that space. The space we see in this way, even though it does not physically exist there, is what I understand as virtual space.

Q: Where do you see the main differences between physical space and virtual space? What are the distinguishing factors?
Or:
Well, first of all, the first space IS physical, while the second only LOOKS physical, even though it is not. But the very essence of virtual space is that it must, at least partly, recreate the appearance of physicality in order to exist at all. Otherwise the pattern through which we see virtual space remains merely a pattern and never becomes anything else.

Of course, physical space also contains us – physical living beings – whereas virtual space does not. We cannot literally step into virtual space (despite the many fantasies of that kind in popular culture), but mentally we can undoubtedly engage with it on many levels. Over the centuries, many forms of art have tried to find ways of creating this kind of experience. It is by no means the case that these things appeared only recently, in the age of computer technology, as it may seem today.

Q: What about painting, or the new media? What kind of experience is involved?
Or:
In this respect I see no difference between old and new media. They are merely different techniques for achieving the same effect. A stretched canvas with a carefully applied pattern of paint can let us see through it something that is not on the canvas itself. In a similar way, a digitally generated arrangement of coloured dots projected onto a wall can achieve exactly the same effect. Both media can engage us mentally and emotionally in the same way. Of course, the two have entirely different technical characteristics and cultural origins, and so are rightly the subject of different theoretical approaches. But in both cases of visual depiction, the result is an experience of space where physically there is no space at all.

Photo: The cover of the book
“The Architecture of Virtual Space”
Source: Or Ettlinger
Q: Do you think that physical space and virtual space have a similar effect on people?
Or:
Yes and no. Physical space has a very direct, immediate effect on us. We cannot not be in physical space. But we can be aware of its presence to varying degrees. Virtual space implies a different kind of presence, and although we are not literally located in it, experientially it still envelops us to a greater or lesser extent. For example, when we watch a film, we actively redirect our attention entirely away from the physical space we are in towards the locations presented in the virtual space.

But even when we walk down a snowy street past a billboard showing a palm tree on a sunny beach, our street – whether we are aware of it or not – visually expands to include that virtual space as well. It is virtual not in the sense that the beach shown exists nowhere, but in the sense that it does not physically exist right here. It may, of course, not exist anywhere in exactly that form, but it is enough that it engages us mentally and touches us vividly at least to some degree. If the advertisers achieve their aim, it even stings us that we are not there physically, instead of here and now.

Q: And what about people's responses? Why do we feel better or worse in different spaces? Do architects take people's sensitivity into account?
Or:
Some architects pay more attention to these things, others less. Some focus mainly on a technologically sound construction when designing a building, others are more interested in functional efficiency, and others still in the cultural statement they feel they have to make. There are also those who clearly do consider how people feel inside or outside a building. In more traditional forms of architecture in particular, attention to the comfort of living space was an inherent part of design, so it did not have to be considered as consciously as it does today. But because of other things that began to take priority, over the last hundred years this once self-evident aspect of design has been pushed aside, and many architects have simply ignored it.

Q: Which elements do you think have the greatest influence on the relationship between space and people? Perhaps you could tell us more about your experience with the project you carried out with your students, based on a science-fiction novel, for which you designed the architecture for a film adaptation of that story.
Or:
The project was based on Isaac Asimov's book The End of Eternity, a story set on a future Earth in various periods between the years 7000 and 250000. It is a time so distant from ours that nothing from history, or from today's speculation about the future, is relevant to designing for that era. In addition, parts of the story take place entirely outside time. I consider it to have been an extreme design challenge.

Q: How do you even approach such a project? Which elements go into the equation?
Or:
Such a project cannot be carried out without a major shift in perception, because only that can take the students beyond existing patterns of thinking. That is also what makes it so interesting and instructive. One of the main problems in studying architecture – or any other creative profession – is that most educational programmes focus on how to produce a feasible project. Far less attention is usually paid to the very processes of creating the ideas and visions that drive a project. The aim of this project was to challenge the students' imagination and strengthen their ability to keep a mental image of the emerging project in front of them at all times.

In our particular case it was more about looking for the factors that shape the architecture of different cultures and periods than about focusing merely on visual impression, or 'style'. So we first studied various periods and places in human history, examining their main values and how the role of architecture was understood at the time. Then, based on Asimov's book, the students began inventing the values of the various cultures that appear in the story and shaping a corresponding understanding of what architecture is for and how it relates to them. From there on, the students' imagination simply took off ...

Photo: A home in the 482nd Century (developed from the book
"The End of Eternity" by Isaac Asimov)
Source: Or Ettlinger

Q: Can you share some of these elements with us?
Or:
It is not a formula. You can grasp this particular process if, for example, you try to understand why the architecture of our time is the way it is. Seeing yourself from the outside is the hardest task, but if we compare today with, say, a period two thousand years ago, we immediately see that today the ruler is neither God nor an emperor, but the economy. Whether we are talking about everyday apartment blocks or outrageously luxurious hotels, it is clear that both are driven by the economy – otherwise they would not get built at all.

Moreover, we live in a time that is entirely focused on the individual 'self' and devoted solely to satisfying its needs and whims. The tool that makes this possible is technology. These, then, are some of the essential elements that define contemporary civilization, and with it architecture: economy, individuality, technology. If we want to vividly imagine a distant future without changing today's prevailing way of thinking, then whatever we create will still look like a product of the late 20th or early 21st century.
Photo: A view from the 2456th Century
(developed from the book "The End of Eternity" by Isaac Asimov)
Source: Or Ettlinger
Q: Do you think there is a connection between the design of space and intuition? By intuition I mean grasping something without being consciously aware of the process by which it happened.
Or:
Undoubtedly. Our ability to perceive the world beyond verbal analysis and sensory impressions is part of being human. The degree of that ability is also determined by the space we live in. Does it encourage this way of perceiving, or does it suppress it? Take, for example, an ordinary office building as opposed to a mountain cabin. Each of them brings out a completely different aspect of who we are.

Q: Towards the end of your book you thank the many inspiring spaces where you spent hours and hours doing research. Libraries, cafes, the reading room of the National and University Library, even Plečnik's benches in the park ... How would you describe your relationship with these spaces?
Or:
They provided me with a context in which I could start creating; they gave me peace of mind and inspired me. In them I felt I was my own person, full of my own ideas; they allowed me to gather myself and simply create.

Q: What is special about these spaces that makes this possible?
Or:
First of all, I do not know. It is an intuitive feeling. I simply sense the spaces that inspire me, so I prefer to go there rather than elsewhere. If you ask me to think about it more precisely, I can start looking for explanations, which we can discuss on three levels. First, such a space probably has a certain balance between the natural and the artificial – certain proportions, a choice of materials, and so on. These are technical factors. The second level is what these technical factors can evoke. This happens at the level of certain feelings, for example relaxation, a sense of safety, or a strong sense of being alive. The third level is what such an experience can awaken in different people – for example, creativity and productivity (in my case) or indifference and boredom (in the case of some others, perhaps).

Q: Where, in your opinion, is the next big challenge in architecture?
Or:
In the past, the challenges included housing the masses, developing cheaper and faster construction technologies and, more recently, building energy-efficient houses that do not harm the environment. I think the next challenge for architecture will be how to combine all these important achievements with the more human qualities that older architecture had but lost in trying to meet those demands. There have already been some failed attempts to fulfil these expectations, which have made the whole thing even harder, but I am convinced it is a challenge worth pursuing nonetheless.

In past forms of architecture there is something that speaks to our intuition, awakens our senses and expresses permanence – dimensions that often cannot be found in contemporary architecture. It is not a matter of visible style or design elements, but of the essential qualities they embody. I see no reason, however, why architecture in the future could not find a way to reunite these lost qualities with the achievements of the past century and let them work as a whole – not just in the case of particular architects, but as generally established practice.



Thank you, Or. We are already looking forward to exploring this topic with you at the next InCo event.


Recommended viewing: 

How we can collaborate 
    Where we can meet

    Interesting contributions
    News
    • Mag. Andreja Kodrin, founder and president of Challenge:Future, received the prestigious international "SEA OF EXCELLENCE" award for outstanding sustainability and business achievements 
    Interesting items

                Executive Assistant to the VP, Corporate Services - The Canadian Foundation for Healthcare Improvement - Canada   
      Excel, PowerPoint, Outlook, SharePoint, Skype for Business and advanced database. Technical/Specialized or Program....
      From The Canadian Foundation for Healthcare Improvement - Fri, 19 May 2017 00:12:44 GMT - View all Canada jobs
                Sr. Database Administrator for Georgia Department of Public Health   

                Web Developer - iBusiness Solution, LLC - Harrisburg, PA   
      Leads a technical team in the design, establishment, management, and configuration of new technologies, applications and database architectures within the...
      From conrep - Sun, 30 Apr 2017 16:00:57 GMT - View all Harrisburg, PA jobs
                L3 Ops - Database Operations - Morgan Stanley - New York, NY   
      EI is responsible for driving the production, operations, and engineering of our data centers, voice and data networking solutions, mainframe servers and...
      From Morgan Stanley - Tue, 27 Jun 2017 17:40:25 GMT - View all New York, NY jobs
                Meghdoot Version 7.9.8 for Post Offices   
      Download the latest version from its official site: ftp://cept.gov.in/Meghdoot7/Updates/Meghdootupdate7.9.8/ or download from Google Drive. Instructions: Take a backup of all databases pertaining to MM...

                The Genki Spark documentary online + fundraising campaign   
      The Genki Spark Documentary Trailer from Misako Ono


      Tufts alumna Misako Ono's documentary The Genki Spark is available online here! It's an incredibly moving look at what The Genki Spark does and why a group like theirs is so important. Launched in 2010 at the Boston Asian American Film Festival, they are the only multigenerational, pan-Asian, women's taiko troupe in the United States. While based in Boston, they have traveled around the US and to the UK to perform and host workshops. Locally they're known for hosting two annual events –  their Making Women's History event (video from 2013 & 2014)  and the Brookline Cherry Blossom Festival which they host with Brookline High School.





      The Genki Spark just celebrated their 5th anniversary, which is remarkable for a niche arts organization. Running a nonprofit is always challenging, and even more so for arts and minority nonprofits. They're currently fundraising so they can continue their amazing work for another year and are trying to reach $10,000 by Wednesday, December 16th. Any amount is appreciated! Details on how to donate are available on their website. If you don't have any spare cash, you can still help by spreading the word on social media. Please help them reach their goal!


      See also:

                Online Marketing and Real Estate Investing   
      The method of making a real estate investing deal is relatively straightforward: find a buyer, find a home for the buyer, match the two, and close the deal. What seems easy in thought is not as easy in action. It takes time to drive around looking for run-down houses and potential deals, or to work with emotional sellers, and that time could be better used. That is why online marketing for real estate investors can help you close more deals.

      Traditional marketing for buyers, whether retail or investors, requires a fair portion of advertising capital. When using online marketing for real estate investors, it is possible to develop buyer leads at no cost. You can set up a site that captures contact information and use free techniques like article writing or Craigslist to send traffic to the site. You can even mix offline methods with online marketing for real estate investors. Postcards, paper adverts, and even bandit signs can send potential buyers to your website.

      With internet marketing for real estate investors, it is possible to keep a database of all the buyers' contact information you collect. This used to be a manual process, but now you can save time by letting the Internet keep track of everything for you. Once people sign up for a buyer's list, you can offer them great property investing or home buying content using auto-responders. Most auto-responder services charge a fee, but it is minimal compared to what is spent on other advertising techniques, such as mailing postcards.

      As buyers are given relevant content, they trust you more and are more prepared to work with you. At that point, you can use the Net to find properties that these buyers would be prepared to buy. This again is a free strategy and does not require driving around looking for distressed properties or For Sale By Owner signs. If you want to avoid wasting time looking for properties manually, you can implement the same online marketing techniques, but gear everything towards people who are trying to sell their properties.

      As property leads come in, you can match them to your buyer's list and make deals with only a minimal amount of advertising investment. Web marketing for real estate investors can be set to run on autopilot too, so it is possible to make deals without spending hours doing so.

      Internet marketing for real estate investors really boils down to basics. After you learn the different systems, you can implement them to find buyers, sellers, or renters. You can even use them to sell products that you may develop, such as e-books, coaching courses, or training. Also, because many investors do not know how to fully employ the methods of web marketing for real estate investors, it gives you a decided advantage in making and providing better deals.

      Although offline methods shouldn't be fully disposed of, particularly because they can work very well alongside online methods, internet marketing for real estate investors opens up free avenues to bring in leads and make more deals in a shorter period of time.

      To get a free bonus E-Book and learn about Internet Marketing for only $1, visit http://tinyurl.com/WABonus
                Data Firm's Poor Encryption Awareness Leaves Personal Information of Nearly 200 Million US Voters Exposed   
      News from June 20: according to foreign media reports, the databases held 198 million US voter records, yet sources revealed that the database was stored on a publicly accessible, unsecured server. The voter information was exposed on the internet without any encryption to protect it. Ultimately, the incident was caused by the data company's system misconfiguration and poor security awareness ...
                Health Worker mHealth Utilization: A Systematic Review   
      This systematic review describes mHealth interventions directed at healthcare workers in low-resource settings from the PubMed database from March 2009 to May 2015. Thirty-one articles were selected for final review. Four categories emerged from the reviewed articles: data collection during patient visits, communication between health workers and patients, communication between health workers, and public health surveillance. Most studies used a combination of quantitative and qualitative methods to assess acceptability of use, barriers to use, changes in healthcare delivery, and improved health outcomes. Few papers included theory explicitly to guide development and evaluation of their mHealth programs. Overall, evidence indicated that mobile technology tools, such as smartphones and tablets, substantially benefit healthcare workers, their patients, and healthcare delivery. Limitations to mHealth tools included insufficient program use and sustainability, unreliable Internet and electricity, and security issues. Despite these limitations, this systematic review demonstrates the utility of using mHealth in low-resource settings and the potential for widespread health system improvements using technology.
                Database Administrator - Oklahoma State University - Stillwater, OK   
      The Database Administrator serves as a member of the IT Enterprise Operating Systems team and works under the direction of the Assistant Manager for Database $6,310 a month
      From Oklahoma State University - Wed, 05 Apr 2017 20:28:25 GMT - View all Stillwater, OK jobs
                Exportizer Pro 6.1.2   
      Exportizer Pro is a database export tool. It allows you to export data to a database, file, clipboard, or printer.
                Kohana3 DB Config for SQLite    
      For a long time I have wanted to use SQLite as my development database, because I can package the whole project together with its database into a single archive. Besides making full backups easier, this also lets me move the project between work environments (home or the office) and keep it running with little effort. In Kohana 2.3.4 you could find Pdosqlite.php under system\libraries\drivers\Database and use that. But in Kohana 3.0, after Database was turned into a module, the only remaining drivers seem to be MySQL and PDO. Judging by this, the Ko3 developers apparently felt that, apart from the commonly used MySQL, every other database should simply be connected through PDO. And going by PHP's official PDO documentation, it seems even MySQL might as well be connected through PDO. XD
                Database Administrator   
      Database Administrator Derby £30,000 - £40,000 + Benefits Due to growth within our client's business, we have a fantastic opportunity for an enthusiastic and experienced Database Administrator to join their team in Derby. Our client's bespoke databases contain vital information that is used to structure the business, deliver services and design ...
                #41219: WP_Query gets slow down with multiple meta query combinations   

      Hi, I have recently been working on a project in which I have to handle complex data based on post meta. I needed to query posts by multiple meta key/value pairs joined by AND, while the same meta key can also hold different values in the database. When the query reaches around 10 MySQL joins on the postmeta table, it becomes too slow to respond. You can check the case I described, and if you see no problem, let me know and I can send you the query request.

      I am also trying to find a solution, but have not succeeded yet.

      Have a great day, thanks. Majid
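      To make the scaling problem concrete, here is a rough sketch (in Python, not WordPress's actual PHP) of how a meta_query with several key/value pairs joined by AND typically becomes one wp_postmeta self-join per pair; the default wp_ table prefix and the mt0, mt1, ... aliases are assumptions made for illustration. Ten pairs means ten joins against what is usually the largest table in a WordPress database, which is where the slowdown comes from.

def build_meta_query_sql(pairs):
    # One wp_postmeta self-join per key/value pair, mirroring the shape of
    # the SQL that an AND-joined meta_query produces (illustrative only).
    joins, wheres = [], []
    for i, (key, value) in enumerate(pairs):
        alias = "mt%d" % i
        joins.append(
            "INNER JOIN wp_postmeta AS %s ON %s.post_id = wp_posts.ID" % (alias, alias)
        )
        wheres.append(
            "(%s.meta_key = '%s' AND %s.meta_value = '%s')" % (alias, key, alias, value)
        )
    return (
        "SELECT wp_posts.ID FROM wp_posts\n"
        + "\n".join(joins)
        + "\nWHERE "
        + " AND ".join(wheres)
    )

# Ten pairs -> ten self-joins on wp_postmeta.
print(build_meta_query_sql([("key_%d" % n, "value_%d" % n) for n in range(10)]))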


                Unlocking big data’s benefits with data visualisation   
      Unlocking big data’s benefits with data visualisation
      As every marketer knows, we have more data about our customers and how they interact with our brands at our fingertips than ever before. We have a deluge of real-time data flooding business, from a wide range of systems and sources—internal CRM databases, data managed by agencies, data from channels such as search, social, ad-servers […]
                MCPlus Technician - System Completion Database Technician - Fluor Corporation - Dawson Creek, BC   
      The purpose of this position is to provide clerical and basic technical support for the department including field engineers, surveyors and document management....
      From Fluor Corporation - Tue, 13 Jun 2017 20:52:21 GMT - View all Dawson Creek, BC jobs
                Optimizing variables cache in Drupal 6   
      In Drupal 6, a number of caching strategies are incorporated to handle large traffic. One of them is the serialization of the whole variable table, which is cached in the database and extracted into the global $conf variable on every request. In one of our production sites, we had a hard time keeping up […]
                Discovering the Dictionary   

      I was well into my 20s before I knew the one on the left even existed. From that I then discovered centuries of local literature. Much of our problem is a lack of understanding.

      The first to compile a dictionary of Scots is thought to be Rev John Jamieson who published his Etymological Dictionary of the Scottish Language in 1808. Here is a review of a recent edition. Burns had died just 12 years before, and of course some editions of his works had contained a glossary to explain some of the Scots terms for the unfamiliar reader, which are kind of mini-dictionaries in their own right. Ulster-Scots poets like Hugh Porter (published 1813) did the same.

      There have been numerous Scots dictionaries since. Some of the Scots dictionaries use the abbreviation Uls when specifying that particular words are found in Ulster. And there are of course many examples of Ulster-Scots words being collected and published too, from William Hugh Patterson in the 1800s to James Fenton in our own day. There is also an extraordinary online project at UlsterScotsAcademy.com which everyone should know about, a volunteer project every bit as impressive as the online Dictionary of the Scots Language. I'm pleased that some of my literary discoveries of recent years have contributed to the ongoing database for UlsterScotsAcademy.com.

      As long as the Scots and Ulster-Scots literary tradition is kept in the dark, people will continue to live in ignorance. And make decisions with no understanding of context or pedigree.


                Background Music Global Market 2017 Key Players,Share, Trend, Segmentation and Forecast to 2022   
      Market Analysis Research Report on “Global Background Music Market 2017 Industry Growth, Size, Trends, Share and Forecast to 2022” to their research database. PUNE, INDIA , June 29, 2017 /EINPresswire.com/ -- Global Background Music Industry …
                Kerala Property Tax payment online   
      How to pay Property tax online in Kerala







      The Revenue and Licence System "Sanchaya" is an application software suite developed for the computerisation of the revenue system in local governments. This application handles property tax, profession tax, rent on land and buildings, and licences such as Dangerous and Offensive (D&O), Prevention of Food Adulteration (PFA) and advertisement tax. Utility payment services such as hall booking, ambulance, vehicles, crematorium and water bill payments can also be handled through this software.

      Sanchaya consists of two modules: 
      1. Sanchaya LB module captures the details of tax payee/institution and demand
      2. Sanchaya Web module (e-payment) is a web based application through which a citizen can check the tax amount due and remit the taxes through an electronic payment gateway. It also has a facility for e-filing of statements online

      Features

      1. Streamlining Revenue System
      2. The general public can create an online account to group various properties owned within Kerala. Various activities can be done using a single login. The owner will also get periodic SMS/email alerts
      3. Status (payment successful/failed) will be intimated to the payee through email or SMS
      4. e-File property tax self assessment form (under the new plinth-area, self-assessment based rule)
      5. No expense incurred by the local body
      6. Local government can generate Demand-Collection-Balance statements at any point of time.
      7. Local governments can easily identify big defaulters and take necessary steps for revenue realisation. Details of defaulters can also be published as a list, if desired by the local government.
      8. 256 bit SSL (VeriSign) secured
      9. Multiple profession tax numbers (traders & employees) can be grouped.
      10. e-File profession tax details
      11. Linkage with FRIENDS, SPARSH, Akshaya and India Post
      12. The eSMS facility, State Data Centre and KSWAN are the common infrastructure of the State Government, utilised in the project

      Functionality

      The URL of the e-payment site is www.tax.lsgkerala.gov.in. The user has to log in to the site and select the services (property tax etc.) offered by the site. The payment link will enable the user to make the payment online.

      Achievements

      1. Sanchaya property tax system was made online in these locations http://tax.lsgkerala.gov.in/epayment/OnlineLBs.php  
      2. The property tax data of all these local governments are available online at the website www.tax.lsgkerala.gov.in
      3. The handling of property tax is completely streamlined with the use of Sanchaya in these locations. 
      4. Sanchaya e-payment facility is successfully up and running in Corporation of Thiruvananthapuram. It is ready to be commissioned for Kollam and Kozhikode Corporations; Guruvayoor, Kannur and Ottapalam Municipalities; Thanalur and Manjeswaram Grama Panchayats. The facility is available for all the local governments of Kerala, and they only need to finalise the database, to utilise the facility.  
      5. Operational process of e-payment was drafted and it is being issued as a Government Order.
      6. Number of transactions so far is 427, since inception (22-Feb-2011) and up to 30-Nov-2011. Amount collected so far is about ₨ 6,31,278/-. (More publicity is planned to build awareness of the facility)

                Meldora Inc. releases Melobase v1.8.1   

      iPhone, iPad / Portable Studio / MultiTracks : Enjoy your own music with Melobase, a creativity tool for recording, listening to and organizing your own musical sequences. Melobase is a client-server solution featuring a database, a sequencer for recording and playing back your music, a metronome...


                AMD EPYC 7601 CPU Hammers SiSoft Sandra Benchmark Database With 32 Cores And 64 Threads   
      AMD EPYC 7601 CPU Hammers SiSoft Sandra Benchmark Database With 32 Cores And 64 Threads It feels a little weird to write about performance results for AMD's EPYC processors and not have to tie the word "leak" into it. As we covered just last week, AMD has finally unleashed its hugely anticipated EPYC processor line for the server market, and to say it's long overdue would be a gross understatement. There is no doubt that Ryzen

                Insights for ArcGIS at the 2017 Esri User Conference   
      With the 2017 Esri User Conference just around the corner, please make sure to visit the Insights team at SDCC – Exhibit Hall B1. Insights for ArcGIS, version 2.0 has just released with more capabilities such as new chart types, additional database support, … Continue reading
                Thread: Century: Spice Road:: General:: Looking for a database of the Trade Cards please   

      by humithmu

      Hi there,

      I've played the game, but don't own a copy. I'm really interested in the math side of the game so I was wondering if someone could direct me to a database of all of the cards that shows what trades are available.

      Or if someone was feeling super generous, a spreadsheet of the input/output trade values of each card would be much appreciated!

      Thanks,
      Michael
                Sr. Database Analyst - InZicht Consulting - New York, NY   
      Demonstrated analytics solution design and build expertise, project and team management skills, strong business consulting acumen in applying advanced analytics... $85,000 - $130,000 a year
      From Indeed - Mon, 26 Jun 2017 22:05:52 GMT - View all New York, NY jobs
                Database Engineer   

                Perl Developer - First point group - Lisbon   
      We are seeking a Perl Developer. Requirements: Portuguese speakers preferred. Perl experience in the context of object oriented web design. Experience with MVC frameworks. Experience with databases such as MySQL and Oracle. Knowledge in HTML 4/5. Knowledge in CSS2/3. Knowledge in JavaScript and frameworks (jQuery, Angular, Backbone). Knowledge in the use and management of LAMP. Territory: Lisbon.
                Optometric Technician   

      Triangle Visions Optometry has a fantastic opportunity to join our growing team. For over 40 years Triangle Visions Optometry has been dedicated to serving the eye care needs of central North Carolina. We are looking for an experienced Optometric Technician to work in our Lexington office. 

      Position: Optometric Technician
      Employment Type: Full-Time
      Industry: Healthcare / Retail
      Reports to: General Manager
      Hours: Must be able to work within a retail office schedule including two Saturdays per Month if needed.

      Description: 

      A rewarding position for an individual that is highly organized, self-motivated, detail-oriented and customer-service focused. Responsible for handling front desk reception, administration, and medical pretesting duties to a loyal customer base where patient care always comes first. 

      Key Responsibilities:

      • Greet all office guests in a professional and friendly manner.
      • Operate a multi-line phone system.
      • Maintain a database of correct patient information.
      • Efficiently schedule appointments for multiple doctors across several offices.
      • Act as a liaison between patients and insurance companies to verify insurance benefits.
      • Educate patients on their individual insurance coverage and options.
      • Correctly invoice insurance and patients.
      • Accurately process patient payment transactions using point of sale software.
      • Analyze, solve, and correct customer service issues using the LEAP technique (Listen, Empathize, Ask, and Produce).
      • Keep accurate daily accounting records.
      • Cultivate an organized and orderly atmosphere.
      • Efficiently complete and precisely document appropriate pre-testing of visual acuity, eye movement, intraocular pressure (IOP), retinal photos, visual field and optical coherence tomography (OCT).
      • Instruct patients on correct contact lens wear, care, insertion and removal.
      • Accurately order, stock and dispense contact lens supplies. 
      • Achieve established Front Desk Associate and Optometric Technician goals and objectives.
      • All other duties as assigned by management.

      Qualifications:

      • AA or BA/BS desired
      • 1-3 years prior optical experience preferred.
      • Adaptable and flexible with the ability to multi-task.
      • Self-motivated and detail-oriented.
      • Interest in healthcare
      • Strong communication skills
      • Must present a professional appearance
      • Some in city travel required.

      Compensation is commensurate on experience. Benefits include paid training, paid vacation, holidays, matching 401(k), medical and vision coverage.

      Please highlight the above skills when replying with your resume.


                mysql table is marked as crashed and last (automatic?) repair failed   
      In the MySQL logs I am seeing the following error for the second time: Table './glpi/glpi_log/' is marked as crashed and last (automatic?) repair failed. If your MySQL process is running, stop it. On Debian: sudo service mysql stop. Go to your data folder. On Debian: cd /var/lib/mysql/$DATABASE_NAME. Try running: myisamchk -r $TABLE_NAME. If that doesn't ...
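      The same repair can often be attempted from SQL instead of the shell, since REPAIR TABLE works on MyISAM tables. A minimal Python sketch, assuming the mysql-connector-python package and placeholder credentials for the glpi database:

      # Sketch: check and repair a crashed MyISAM table via SQL instead of myisamchk.
      import mysql.connector

      conn = mysql.connector.connect(host="localhost", user="root",
                                     password="secret", database="glpi")
      cur = conn.cursor()

      cur.execute("CHECK TABLE glpi_log")    # reports the crashed status
      print(cur.fetchall())

      cur.execute("REPAIR TABLE glpi_log")   # MyISAM only; InnoDB needs a restore instead
      print(cur.fetchall())

      cur.close()
      conn.close()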
                CTAM: VOD TV on BR players   
      Wanting to make advanced TV services easier to use, the companies Clearleap and Related Content Database (RCDb) have combined their technologies to
                Using Datastream   
      Some links to training and support material for using the Datastream database.
                Mass Observation Online   
      The Library now has access to the Mass Observation Online database. This is the digital archive of the Mass Observation Archive which contains thousands of documents generated by the Mass Observation social research organisation from its inception in 1937 to the... Read More ›
                The Triple Crown list is now effectively maintained by MSF, apparently   

      Here's the moment, captured in the Openwaterpedia logs 2 weeks ago, when Steve M made a wholesale update to the WOWSA Triple Crown list, to match MSF's database-derived Triple Crown list.

      http://openwaterpedia.com/index.php?title=Triple_Crown_of_Open_Water_Swimming&action=historysubmit&diff=442249&oldid=435734

      Previously, there were a number of discrepancies, especially in ordering. Most significantly, the WOWSA list had been missing Dailza Damas Ribeiro, who was fourth overall to complete the Triple Crown back in 1995.

      Here is a Wayback Machine snapshot of the MSF list on March 16.

      Felt the need to put that on record, for reasons that probably only old-timers will understand. The work that went into producing a full, accurate, properly-ordered list was significant. Glad to find that WOWSA has decided to adopt the MSF list.

      A very belated congrats to Ms. Ribeiro, who passed away in November 2008.

      http://db.marathonswimmers.org/triple-crown/


                Use of Patient-Delivered Partner Therapy in US College Settings: Associations With Legality, Perceived Legality and Other Sexual and Reproductive Health Services.   
      Background: Young adults, including college students, have higher rates of chlamydia than the general population. Patient-delivered partner therapy (PDPT) is a partner treatment option for sex partners of individuals diagnosed with chlamydia or gonorrhea. We examined college health center use of PDPT in a national sample of colleges. Methods: During 2014 to 2015, we collected data from 482 colleges and universities (55% of 885 surveyed), weighting responses by institutional characteristics abstracted from a national database (eg, 2-year vs 4-year status). We asked whether the school had a student health center and which sexual and reproductive health (SRH) services were offered. We also assessed the legal and perceived legal status of PDPT in states where schools were located. We then estimated PDPT availability at student health centers and measured associations with legal status and SRH services. Results: Most colleges (n = 367) reported having a student health center; PDPT was available at 36.6% of health centers and associated with perceived legality of PDPT in the state in which the college was located (odds ratio [OR], 4.63; 95% confidence interval [CI], 1.17-18.28). Patient-delivered partner therapy was significantly associated with availability of SRH services, including sexually transmitted disease diagnosis and treatment of STI (56.2% vs 1.1%), gynecological services (60.3% vs 12.2%), and contraceptive services (57.8% vs 7.7%) (all P < .001). Compared with schools taking no action, PDPT was more likely to be available at schools that notified partners directly (OR, 8.29; 95% CI, 1.28-53.85), but not schools that asked patients to notify partners (OR, 3.47; 95% CI, 0.97-12.43). Conclusions: PDPT was more likely to be available in colleges that offered SRH services and where staff believed PDPT was legal. Further research could explore more precise conditions under which PDPT is used. (C) Copyright 2017 American Sexually Transmitted Diseases Association
                Samsung Galaxy Note 8 will run Android 7.1.1   
      Samsung Galaxy Note 8, codenamed Gr3at, has appeared in the HTML5Test database running Android 7.1.1. (CCM) The Samsung Galaxy Note 8 has been talked about for months now. The latest news about the successor to the ill-fated (and explosive) Samsung Galaxy Note 7 concerns the operating system the device will ship with. According to the latest rumours, published on
                SAP administration forum | Re: many "LOAD_VERSION_LOST" dumps   
      So, as usual, you went and lost your Basis specialist...

      Tell me, did you actually read the dump text before embarrassing yourself with a forum post?
      It says there quite clearly:
      Quote: program had to be reloaded
      from the database because a bottleneck pushed it out of the local
      program buffer. However, the program found in the database had
      a wrong load format.
      This error may occur if several application servers with different
      load formats use the same database. The load format normally changes
      when you migrate to another R3 Release

      How to fix it? Find a permanent SAP Basis specialist, or hire a temporary one for the specific problems.

      P.S. 140 dumps is nothing.

      Statistics: Posted by шрам • Fri, Jun 30 2017, 16:22 • Replies 1 • Views 37

                Your Code in Spaaace!   
      In the ISS there are two Astro Pi computers, Ed and Izzy, equipped with Sense HATs, two different camera modules (visual and IR), and stored in rather special cases. They are now running code written by UK school children - the winners of a competition. The data will be feeding back soon! Inspired? If you're a UK child aged from 8-18, you can enter two new music-based coding challenges. Also, after the current set of programs have been run, the Astro Pi machines will be entering flight recording mode, recording sensor readings (including pitch, roll, yaw, rotational intensity, acceleration, humidity, pressure, temperature and magnetic field strength) into a database every 10 seconds. Example data is available now, so you can prepare for the real thing becoming available in late February / March. Can you detect crew activity, O2 repressurisation, the South Atlantic Anomaly or even a CHX dry-out? The winners of the previous competition:
      • Crew Detector - tries to determine if a crew member is nearby and then takes a photo
      • Spacecraft - visualise the sensor data as structures in a Minecraft world
      • Flags - Figures out which country the ISS is above and shows that country's flag
      • Watchdog - Monitors the environment, raising alarms if temperature / pressure / humidity move outside acceptable parameters. Compensates for the thermal transfer between the sensors and the CPU
      • Uses the IR camera to produce a measure of plant health
      • Reaction Games - tests reaction speeds during the mission to see if this changes over time
      • Radiation - uses the camera to detect high-energy space radiation
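      For a rough idea of what that flight-recorder mode involves, here is a minimal Python sketch that samples a Sense HAT every 10 seconds into a local SQLite database. The sense_hat library only works on a Raspberry Pi with the HAT attached, and the columns here are an illustrative subset of what the Astro Pis actually record:

      # Minimal sketch: log a subset of Sense HAT readings to SQLite every 10 seconds.
      import sqlite3
      import time
      from sense_hat import SenseHat

      sense = SenseHat()
      db = sqlite3.connect("flight_data.db")
      db.execute("""CREATE TABLE IF NOT EXISTS readings (
                        ts REAL, pitch REAL, roll REAL, yaw REAL,
                        humidity REAL, pressure REAL, temperature REAL)""")

      while True:
          o = sense.get_orientation()  # degrees: keys 'pitch', 'roll', 'yaw'
          db.execute("INSERT INTO readings VALUES (?, ?, ?, ?, ?, ?, ?)",
                     (time.time(), o["pitch"], o["roll"], o["yaw"],
                      sense.get_humidity(),      # relative humidity, %
                      sense.get_pressure(),      # millibars
                      sense.get_temperature()))  # degrees Celsius
          db.commit()
          time.sleep(10)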

                Hoopla   

      Download or stream 5 items each calendar month with no waiting lists. If you see it and you want it, you can have it. Hoopla gives you access to 500,000 movies, tv, music, books*, audiobooks*, and comics. Use Hoopla on your laptop or computer by logging in to the website, or use the Hoopla app on your mobile devices.

      For a list of supported devices, go to www.hoopladigital.com/help and click on Supported Devices.



      Getting Started

      1) www.hoopladigital.com

      2) Click the blue GET STARTED button… you’ll be prompted for the following steps:

      3) Enter your email address and create a password

      4) Search for and select APPLETON PUBLIC LIBRARY

      5) Enter your Library Card Number and PIN

      6) Click the blue SIGN ME UP button



      Using It

      1) Login to the site OR open the app
      2) Find what you want and click Borrow. It's that easy. Movies, Audiobooks, and TV give you the option to Download to your device (rather than just stream by default) while Books and Comics download automatically.



      Check Out Periods

      21 days: audiobooks*, books*, & comics
      07 days: music
      03 days: movies & tv episodes (each episode counts as one download toward your total)
      When your checkout period is up, the items will automatically return and be deleted from your device.

       


      *Books & Audiobooks: you won’t find a lot of best sellers. Hoopla is actively working with the big 5 publishers, so this may change in the future.


                OneClickDigital Audiobooks   

      OneClickdigital provides access to thousands of downloadable audiobook titles published by Recorded Books.

      Helpful Hints & Links
      Get the correct OneClickDigital App (not the OneClickDigital eReader)
      • The name of the library is Outagamie Waupaca Library System - Owlsnet; if you just start out by typing 'out' it will appear as a dropdown.
      Reset a forgotten password


                Overdrive WI Digital Library   

      Downloadable audiobooks, music, eBooks, and videos. A total of 10 items may be checked out and 10 additional items placed on hold for a single account. You will likely encounter Hold Lists for some digital titles; we do not have unlimited access to digital titles. 

       HELP with Overdrive


                OnePlay   
      • Over 1,000 games from more than 75 different publishers: 1500+ PC titles & 500+ Android titles

      • No holds or waiting: download the games you want to your computer and play them whenever and wherever you are – no internet required*
      • Login can be used anywhere and on multiple devices
      • Parents can set a different password from the account to approve activation of game
      • Games check-out for one month. Towards the end of the check-out period, you should receive an email asking if you want to renew for another month. If you don't renew, the game is removed from your computer and game panel.

      *An internet connection is required to download and activate the game. Once downloaded and activated, games can be played while off-line


                Mango Languages   

      Mango Languages is an easy to use and effective language learning tool focusing on everyday conversation. This program provides step by step lesson plans for many different languages. Mango includes specialty, topical courses and non-English movies to help you practice comprehension.

      Mango is easy to use and can work on any operating system and with any web browser; the Mango app is free.


                Zinio   

      Read magazines on your tablet or smartphone with your Infosoup library card. Many popular and specialty titles available. You will need to create a Zinio account with your library card number, an email address, and a password for the Zinio site.


                1985 Coahuila/Durango lineups added to the luchadb   
      I added 1985 lucha libre lineups, mostly from the cities of Torreon and Gomez Palacio, to the luchadb database over the last week. They're integrated into the different pages of this site, and they're also just available here. This is a slow continuing project to mine the El Siglo de Torreon archive for lucha lineups and results. It's one […]
                VMworld 2017 USA Sessions   
      I’m proud to announce that my sessions for this year’s VMworld 2017 conference in Las Vegas have turned up in the VMworld Content Catalog! I have four exciting sessions (if you’re a database geek) at the conference, and for those DBAs with VMware administrators attending this conference, tell them to attend these sessions so they [...]
                WP-Slimstat vulnerability exposes WordPress websites to SQL injection attacks   
      (LiveHacking.Com) – A recent security advisory from Sucuri has revealed that the popular WordPress plugin WP-Slimstat is vulnerable to SQL injection attacks because of a weak secret key. If exploited fully, the bug could allow hackers to use SQL injection attacks to download sensitive information from a susceptible site's database, including usernames and (hopefully) hashed passwords. According to Sucuri […]
                Florida Arrest Records   
      Arrest checks are quickly becoming a routine step both when recruiting new employees and when vetting current partners. In the interest of safety and security, arrest record checks have become one of the most widely used measures of character. The methods and trends in Florida are the same as in the rest of the country. A Florida arrest records check is normally used to assess a person's suitability for a specific job.

      Under state law, Florida criminal records are public data. Unless they are destroyed or sealed by the courts, anyone can request arrest data from the responsible government agencies. Administrative fees may apply, but only for copying the data. All criminal records are also maintained regardless of the outcome, so even a plea bargain, dropped charges or another type of negotiation will still be retained in the database.

      Police departments, sheriff's offices, the highway patrol and other law enforcement agencies across the state of Florida keep records of all investigations they conduct. They submit them every month to a central repository to be logged and compiled at the state level. Florida criminal records are also submitted to federal agencies, including the Department of Justice and the FBI.

      Determining when it is appropriate to use arrest data, especially in official processes such as pre-employment screening, is not easy; lawyers or other professionals are normally needed. A simpler alternative is to obtain the arrest data from independent data companies and let them handle the legal issues. These companies are specialists in this kind of data in their own right.

      Florida criminal records checks are generally used by companies to screen potential contractors, or privately to check on friends, neighbors, relatives or anyone else. Free criminal records checks can be obtained from the various county law enforcement organizations or state departments. Be aware that there are restrictions on the official handling and use of arrest record checks. People should also check their own arrest data regularly to guard against inaccuracies and errors.

      A Florida arrest records check is the quickest and most efficient way of gauging a person's history of trouble with the law. The Florida state repository only records offenses and convictions; however, arrest data are public data under Florida state law unless they have been sealed or removed by the courts. As a result, arrest data are available to any citizen. In addition, an arrest record is entered for each felony conviction whether or not imprisonment resulted.

                Basics of Cheap Windows Hosting    
      Choosing an affordable Windows hosting provider to put your company on the web is a very important decision. The business that hosts your company's site cheaply may not offer the exact plan your company actually needs. There are several things to consider when hunting for a reputable business to host your company's website.

      Windows or Linux web hosting.

      The operating system the host specifies is the operating system your server will use. Using Windows on your own computer does not mean you need Windows hosting, and you do not need it just to build your pages on the front end. Windows hosting does add capabilities such as ASP and .NET programming, which can be genuinely powerful. Take advice from whoever is building your pages and doing this work for you.

      Preferred Affordable Windows hosting services:

      Look for an affordable Windows hosting company that offers both the services and the quality you need. Some of the affordable features that top the charts are:

      Discounted or free domain registration - A domain that matches the business name looks professional. Many hosts offer this facility for free and will make sure that all updates and required renewals are completed on time.

      Secure transactions on their server - Security is vital when you make or accept payments on the Internet, and you also send a lot of personal information that should not be shared. The host should include SSL in its basic rate.

      PHP and MySQL support - Database support and PHP are very powerful tools, and you get them if you choose a UNIX server. They let you run applications such as forums, content management systems, e-commerce and more.

      Design services at lower prices - Professional web design is also offered to customers at a good discount, and it is well worth taking advantage of. A professional website makes a good impression on customers and puts them at ease.

      Webmail services and POP email boxes - POP email boxes are essential: with them you can give every staff member and every office an individual email ID. A webmail service gives you the ability to access your email account from any location.

      You can find a cheap Windows hosting provider that offers all the features you want within your means. It just takes a modest amount of comparison shopping to find the best affordable Windows hosting company for your professional web site.

                Quick Heal Virus Database 17.00(30 June 2   
      Offers you the latest virus definitions you can use to manually update
                Dbvisit Replicate 2.9.00   
      Create a duplicate for your Oracle database with this program
                PHPRunner 9.8 B29055   
      Database driven PHP web site with no programming
                ASPRunner Professional 9.8 B29055   
      ASPRunnerPro creates a set of ASP pages to access and modify any database
                Oracle Database Administrator / Performance Tuning Specialist - RPM Technologies - Toronto, ON   
      *Job Summary* The Oracle Database Administrator / Performance Tuning Specialist is responsible for the maintenance and implementation of database changes for
      From Indeed - Thu, 29 Jun 2017 16:24:19 GMT - View all Toronto, ON jobs
                DBA2 DBA - Sky System Inc - Toronto, ON   
      *JOB RESPONSIBILITIES: * As a DB2 Database Administrator II, you will provide required support for business applications using DB2 databases. As part of a
      From Indeed - Fri, 02 Jun 2017 20:38:07 GMT - View all Toronto, ON jobs
                Oracle DBA / Performance Specialist - RPM Technologies - Toronto, ON   
      The Oracle Database Administrator / Performance Tuning Specialist is responsible for the maintenance and implementation of database changes for our
      From RPM Technologies - Tue, 09 May 2017 22:27:01 GMT - View all Toronto, ON jobs
                Oracle Database Administrator - RPM Technologies - Toronto, ON   
      *About RPM* RPM Technologies provides software solutions and services to the largest financial services companies in Canada. We offer product record keeping
      From Indeed - Mon, 08 May 2017 20:40:57 GMT - View all Toronto, ON jobs
                Wine Storage Website   
      Fully featured database driven website with intelligent ‘How to choose’ section.
                Hourly Paid Teacher in Software Applications Specialist - ACS/AEC - LEA.8F - Vanier College - Vanier, QC   
      420-HSV-VA Database Design Project (75 hours). INTENSIVE DAY PROGRAM – MEQ 12....
      From Vanier College - Tue, 27 Jun 2017 17:09:43 GMT - View all Vanier, QC jobs
                5 Email Directories To Help You Find An Email Address   
      If you have been desperately trying to find an email address but have had little to no luck whatsoever, you are looking in the wrong place. Trying to find addresses on your own can be extremely time consuming and is almost like finding a needle in a haystack. This is precisely why email directories have had such great success on the internet. Here are five email directories you can use to help you find an email address.

      1. Bigfoot email search
      The first web site you want to take a look at to help you find an email is Bigfoot email search. This site has made it convenient and simple for you to find the person you are looking for. If you are looking for a particular person, you can enter their first and last name followed by their state to find long lost friends, relatives and classmates. This is a powerful tool that will help you find what you are looking for.

      2. Email finder
      The next site to look at is email finder. This web site allows you to search for free for 90 days giving you plenty of time to search through the database and attempt to find the person you are looking for. They claim to be the world’s largest directory of email addresses that are available to the public. You will be able to find current email addresses, phone information, and search through over 20 social networks all at once.

      3. Reunion.com
      After registering at this site, you will be able to search through a comprehensive list of results to help you get back in touch with the people you used to be so close to. In addition to searching by name or email address, you will also be able to search by school to find old classmates. This is one of the more popular email directories that can help you find an email address.

      4. Spock
      This is another free web site that you can take advantage of to find old friends or relatives. With this site you will not only be able to find people by name and location, but you will also be able to find them by tags and anything else related to them. This broadens the search so that you have a better chance of finding what you are looking for.

      5. Freshaddress.com
      The last site you will want to take a look at in order to find an email address is freshaddress.com. This site links old and new addresses together so that you can quickly find the person and the information you desire. In addition to old and new addresses, you will also find that you can search using a wide array of other pieces of information or criteria that you know.
      Looking for a high school sweetheart or long-lost friend? Try this easy to use reverse email search engine to find an email address and locate them. 100% guaranteed
                JetBrains DataGrip 2017.1.5 Build 171.4694.58   
      JetBrains DataGrip 2017.1.5 Build 171.4694.58 | 147.4 Mb Meet DataGrip, our new database IDE that is tailored to suit specific needs of professional SQL developers.
                Collectorz.com Book Collector Pro 17.1.2 Multilingual   
      Collectorz.com Book Collector Pro 17.1.2 Multilingual | 16.4 Mb Collectorz.com Book Collector - use this book database application to catalog your book collection. Adding books to the database is quick and easy, no typing needed. Just type the author and title and Book Collector will automatically download all information from various sources on the internet (like Amazon and the Library of Congress), including the cover image.
                PeopleSoft and Adaptive Query Optimization in Oracle 12c   
      Adaptive Query Optimization is a significant feature in Oracle 12c. Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here). Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics… There are two distinct aspects in Adaptive […]
                Oracle Database Administrator - Splice - Ontario   
      Build and configure databases for new clients and demos. Are you a passionate Database Administrator (DBA) who specializes in Oracle products?...
      From Splice - Mon, 19 Jun 2017 11:47:07 GMT - View all Ontario jobs
                Medical Secretary: Annapolis-Bowie/Full-time, estimated start date 8/7/2017/8:00am-4:30pm - Bowie, MD   
      Proficient and correct use of all facets of medical practice software to ensure the integrity of patient database information. Assist physician with Neurosurgical research projects and administrative responsibilities as necessary such as making CME arrangements,...
                Thinking About Google Scholar   

      Last term in an online writing class one of my students submitted an annotated bibliography for an upcoming research paper. In one of her entries, she cited Google Scholar as her source. Curious and skeptical, I looked up this Google Scholar, which I hadn't heard of before. Turns out, Google Scholar functions similarly to Google's regular search engine, except that it returns only "scholarly literature," rather than just any old web page. I also turned to one of CGCC's trusty librarians, who told me that Google Scholar is pretty reliable, but like any information that is freely available on the web, "students will need to do some critical thinking to evaluate it." Another problem is that GS also "directs you to sites that require payment for the articles"-- a notion that is antithetical to what a student is usually trying to accomplish (ie, find freely available information).

      My main concern, however, lies not with Google Scholar specifically. It's that GS signifies a growing number of information databases on the web-- information middlemen, essentially-- that obfuscate for students the true sources of information. When my student had cited Google Scholar, she should have actually been citing the journal that GS procured in her search. And GS may profess to value scholarly research, but can we say the same for WebMD, Wikipedia, and About.com? Some may say I'm comparing apples and oranges here, but I think the overall premise holds. A preponderance of online clearinghouses for information makes data easier to access, sure. But are these sources being responsible with their data? How can I make sure that my students are vetting their sources carefully without simply proscribing a long list of "database-type" web sites, or forcing them to only use the databases available in the college library?

      As with innumerable technologies now available to students, I'm afraid a point gained for convenience means a point lost for learning. What is a teacher to do?
                Page Group with more than one form   
      Hi Michael Uno, thanks for this plugin. I have created a page group with 4 sub-pages; then I added a form in each sub-page. Only the information from the last form that I save is stored in the database. What can I do to save the information from all 4 forms? Thanks in advance for […]
                Beta Databases & Catalog sites unavailable   
      The beta databases page and beta catalog search are temporarily unavailable due to technical difficulties.  We are working to resolve the issue as quickly as possible.  In the interim, please use the Databases page on the current library site and … Continue reading
                Supreme Court rejects bid for court records database    
      The Supreme Court of Virginia has embraced an interpretation of state open records law that blocks public access to the court’s own collection of local case information. The court ruled June 29 that it cannot be considered a “custodian” of records that it receives from individual circuit court clerks around the state, so its searchable ...
                dbGaP News and Announcements   
      Brief news and announcements from dbGaP, the database of Genotypes and Phenotypes.... click here to continue
                ClinVar News and Announcements   
      Brief Announcements highlighting recent changes and enhancements to ClinVar database.... click here to continue
                dbVar News and Announcements   
      Brief news and announcements from dbVar, a database of genomic structural variants.... click here to continue
                dbSNP News and Announcements   
      Brief announcements highlighting recent enhancements and changes to dbSNP database.... click here to continue
                PubMed New and Noteworthy   
      Brief announcements highlighting recent enhancements and changes to the PubMed and MeSH databases.... click here to continue
                Bookshelf News   
      Brief Announcements highlighting new books and features added to the NCBI Books database.... click here to continue
                Bookshelf News   
      Brief Announcements highlighting new books and features added to the NCBI Books database.... click here to continue
                dbGaP News and Announcements   
      Brief news and announcements from dbGaP, the database of Genotypes and Phenotypes.... click here to continue
                PubMed New and Noteworthy   
      Brief announcements highlighting recent enhancements and changes to the PubMed and MeSH databases.... click here to continue
                dbSNP News and Announcements   
      Brief announcements highlighting recent enhancements and changes to dbSNP database.... click here to continue
                dbGaP News and Announcements   
      Brief news and announcements from dbGaP, the database of Genotypes and Phenotypes.... click here to continue
                Bookshelf News   
      Brief Announcements highlighting new books and features added to the NCBI Books database.... click here to continue
                ClinVar News and Announcements   
      Brief Announcements highlighting recent changes and enhancements to ClinVar database.... click here to continue
                PubMed New and Noteworthy   
      Brief announcements highlighting recent enhancements and changes to the PubMed, Journals, and MeSH databases.... click here to continue
                ClinVar News and Announcements   
      Brief Announcements highlighting recent changes and enhancements to ClinVar database.... click here to continue
                dbGaP News and Announcements   
      Brief news and announcements from dbGaP, the database of Genotypes and Phenotypes.... click here to continue
                HomoloGene News   
      Announcements of new features and datasets for the NCBI HomoloGene Database.... click here to continue
                Bookshelf News   
      Brief Announcements highlighting new books and features added to the NCBI Books database.... click here to continue
                PubMed New and Noteworthy   
      Brief announcements highlighting recent enhancements and changes to the PubMed, Journals, and MeSH databases.... click here to continue
                dbVar News and Announcements   
      Brief news and announcements from dbVar, a database of genomic structural variants.... click here to continue
                SQL DBA/Developer - Addison Group - Oklahoma City, OK   
      Modify existing databases and database management systems, or direct programmers and analysts to make changes. Design as well as develop technical solutions to define...
      From Indeed - Tue, 13 Jun 2017 14:03:41 GMT - View all Oklahoma City, OK jobs
                Manager, Laboratory - Precision Castparts Corp. - Toronto, OH   
      Provide input to the programmers and Toronto JADC representative on programming/reporting requirements for STAR databases....
      From Precision Castparts Corp. - Wed, 21 Jun 2017 16:47:25 GMT - View all Toronto, OH jobs
                Global Sorbitol Market 2017 Key Players - Gulshan Polyols, Ueno Fine Chemicals, Caixin Sugar, Cargill, Roquette   
      Global Sorbitol Market 2017 Key Players - Gulshan Polyols, Ueno Fine Chemicals, Caixin Sugar, Cargill, Roquette Global Sorbitol Market Professional Survey Report 2017 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations

                Global Microbial Air Samplers Market 2017 Key Players - Ogawa Seiki, Qingdao Junray, Awel, Emtek, Sarstedt   
      Global Microbial Air Samplers Market 2017 Key Players - Ogawa Seiki, Qingdao Junray, Awel, Emtek, Sarstedt Global Microbial Air Samplers Market Professional Survey Report 2017 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility

                Global Metformin Hydrochloride Market 2017 Key Players - Vistin Pharma, Harman Finochem, Aarti Drugs, Merck Sante   
      Global Metformin Hydrochloride Market 2017 Key Players - Vistin Pharma, Harman Finochem, Aarti Drugs, Merck Sante The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations for growth, investment return analysis, trends

                Global Squalene Market 2017 Key Players - Sophim, Arista Industries, Nucelis, Amyris, Seadragon Marine Oils   
      Global Squalene Market 2017 Key Players - Sophim, Arista Industries, Nucelis, Amyris, Seadragon Marine Oils The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis, recommendations for growth, investment return analysis, trends

                Global Plastic Magnet Market 2017 Key Players - Mate, K&J Magnetics, MPI, Magtech Magnetic Products   
      Global Plastic Magnet Market 2017 Key Players - Mate, K&J Magnetics, MPI, Magtech Magnetic Products Global Plastic Magnet Market Professional Survey Report 2017 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis,

                Global Disposable Camera Market 2017 Key Players - AgfaPhoto, Kodak, Ilford, Fujifilm, Rollei   
      Global Disposable Camera Market 2017 Key Players - AgfaPhoto, Kodak, Ilford, Fujifilm, Rollei Global Disposable Camera Market Professional Survey Report 2017 The report’s analysis is based on technical data and industry figures sourced from the most reputable databases. Other aspects that will prove especially beneficial to readers of the report are: investment feasibility analysis,

                Thom Craver Satisfies Your #Analytics Cravings: An Interview   
      With seventeen years of industry experience under his belt, it’s an understatement to say Thom Craver knows what’s up with the World Wide Webz. Currently working as Web and Database specialist for the Saunders College of Business at Rochester Institute of Technology, Thom’s responsible for all Web and social presences. From client consulting to guest lecturing at […]
                PHP Developer / Software Engineer (m/f) - Ventoro Fenster & Türen GmbH - Berlin   
      PHP Developer / Software Engineer (m/f) - Backend, PHP framework, database. To strengthen our team we are looking for a PHP developer / software engineer (m/f) for our Berlin office at the earliest possible date, who wants to help shape Ventoro's success with us. ABOUT VENTORO: At Ventoro we are digitalising the window market! When buying windows, we offer our customers an all-round carefree package including...
                Oracle and SQL Server DBA / Request Technology - Anthony Honquest / Chicago, IL   
      Request Technology - Anthony Honquest/Chicago, IL

      Oracle and SQL Server DBA

      Chicago, IL

      Prestigious organization is looking for an Oracle and SQL Server DBA who provides planning, architecture, implementation and operational support for all enterprise databases at Company. Particular focus for this individual will be the data architecture and operations of the (SQL Server, Oracle, and/or Epic Cache) database environment(s).

      Position Responsibilities:

      Develops database architectures that incorporate new features of (SQL Server, Oracle and/or Epic Cache) and OS software. Identifies strategic directions, creates high level plans, and develops project plans to implement the new strategies.

      Architects, designs, installs, creates architecture documentation, and performs operational support for all database environments (currently over 500 databases).

      Architect zero data loss and high availability database environments for mission critical applications.

      Assists application development teams in working with databases, planning and implementing production changes and assisting with query optimization. Responsible for analysing and translating business information (data) and technical requirements into solutions to achieve business objectives.

      Creates enterprise data architecture, works with application and infrastructure teams to produce an optimal, high level, conceptual design for projects.

      Creates project plans for complex projects and monitors and reports project status. Maintains a log of issues and coordinates activities to resolve them.

      Implements and performs capacity planning, performance monitoring and reporting processes.

      Develops maintenance and backup procedures, change control processes and operational documentation.

      Lead database development projects and advise management and users on new or optimal technologies or methods to improve the functionality and/or efficiency of the organization's databases.

      Implements and performs proactive system monitoring to ensure high-availability and performance.

      Performs ongoing performance tuning and optimization activities.

      Understanding schema and dimension options beyond relational to include such techniques as star, cubes and other options to support best hosting options.

      Position Qualifications Include

      8+ years of database administration experience with at least two technical platforms including (MS SQL Server, Oracle and/or Cache).

      At least 5 years' experience working with large databases.

      At least 7 years' experience with backup and recovery strategies and procedures

      Experience in performance tuning and troubleshooting.

      Experience designing and implementing High Availability configuration

      Experience working with file system that are hosted in a SAN environment.

      Epic system certification for Epic database support

      Experience with Epic Healthcare Systems and the desire to learn new database technologies.

      Strong communication, organization, planning and collaboration skills.

      Employment Type: Permanent
      Work Hours: Full Time

      Pay: $90,000 to $110,000 USD
      Pay Period: Annual

      Apply To Job
                SQL Oracle Database Administrator / Request Technology - Craig Johnson / Chicago, IL   
      Request Technology - Craig Johnson/Chicago, IL

      Prestigious Enterprise Company is currently seeking a SQL DBA with Oracle skills as well. Candidate will work with the DBA team primarily responsible for building, maintaining, administering and supporting databases in our environment. The DBA will also be involved in the planning and development of databases, monitoring and solving performance issues, maintaining integrity and security of the database environment as well as troubleshooting any issues on behalf of the users.

      Qualifications:

      4+ years experience with Microsoft SQL Server and Oracle is required

      Strong in Microsoft SQL Server 2008 and 2012

      Significant experience in SQL Server development

      Complex stored procedures

      Experience in all aspects of MS SQL Administration

      Backup and Recovery

      Monitoring (DMVs, Extended Events, Alerts, etc)

      Installation and Configuration

      MS SQL Server monitoring and performance tuning experience

      DB Mirroring

      Replication

      Log Shipping

      Clustering

      Strong Support & troubleshooting skills

      Excellent communication skills & detail-oriented.

      Employment Type: Permanent
      Work Hours: Full Time

      Pay: $90,000 to $110,000 USD
      Pay Period: Annual

      Apply To Job
                SQL Server Developer and DBA / Request Technology - Craig Johnson / Pembroke Pines, FL   
      Request Technology - Craig Johnson/Pembroke Pines, FL

      Prestigious Enterprise Company is currently seeking a SQL Server Developer with DBA skills. Candidate is responsible for providing support, performance tuning and design consultation for enterprise database applications. Candidate will partner with the development teams throughout the application life cycle to ensure the database is optimized, ensuring maximum application performance and availability. The individual will partner with the system DBAs, ensuring that the database environment is stable, reliable, robust and kept current with regard to database and patching levels.

      Responsibilities:

      Implement and maintain the database design including physical and logical data models.

      Advanced SQL, T-SQL programming skills with ability to write, debug and tune procedures, functions and ETL packages.

      Tune database queries for optimal performance.

      Work collaboratively with application development teams and business to better understand overall application needs and make recommendations accordingly.

      Partner with project teams and interact with customers to find solutions for projects and operational issues for existing and proposed databases environments.

      Identify, analyze and solve problems related to database, applications or reporting in a thorough, timely manner.

      Develop and document standards, policies, procedures and key performance metrics that support the continued improvement of IT services.

      Assist developers with all the database activities.

      Monitor application related jobs and data replication activities.

      Perform code reviews and SQL code deployments.

      Assist QA Teams in Unit and Stress testing operations.

      Anticipate and devise possible solutions to application and database-related problems.

      Recommend and maintain SQL Server configurations for both production and development environments.

      Assist system DBA team by deploying SQL Server upgrades and service packs.

      Assist system DBA team by performing backup, recovery and archival tasks on database management systems.

      Participate in On-Call rotation for critical production support.

      Perform all routine scheduled SQL Server and database maintenance.

      Qualifications:

      Minimum of 5 years of experience working as a database developer/architect in a SQL Server environment.

      Minimum of 3 years of experience working as a database administrator.

      Minimum of 3 years of experience working with large, complex databases in a large enterprise environment

      Possess a solid understanding of the relationships of architecture components within technology infrastructure.

      Expertise with the following technologies: SQL 2008/2012, SSIS, SSAS, SSRS.

      Experience working with .Net, C# technologies.

      Knowledge and experience working with source code version control systems.

      Excellent Data Modeling experience.

      Strong understanding of coding methods and best practices.

      MCDBA or MCSA database development certification a plus.

      Familiarity with other database technologies a plus.

      Excellent communication skills a must.

      Ability to provide 24x7 support and participate in an on-call rotation

      Employment Type: Permanent
      Work Hours: Full Time

      Pay: $100,000 to $115,000 USD
      Pay Period: Annual
      Other Pay Info: Bonus

      Apply To Job
                SQL Server Developer DBA / Request Technology - Anthony Honquest / Pembroke Pines, FL   
      Request Technology - Anthony Honquest/Pembroke Pines, FL

      Pembroke Pines, FL

      Prestigious Enterprise Company is currently seeking a SQL Server Developer/DBA. Individual is responsible for providing support, performance tuning and design consultation for enterprise database applications. Individual will partner with the development teams throughout the application life cycle to ensure the database is optimized, ensuring maximum application performance and availability. The individual will partner with the system DBAs, ensuring that the database environment is stable, reliable, robust and kept current with regard to database and patching levels.

      Responsibilities:

      Implement and maintain the database design including physical and logical data models.

      Advanced SQL, T-SQL programming skills with ability to write, debug and tune procedures, functions and ETL packages.

      Tune database queries for optimal performance.

      Work collaboratively with application development teams and business to better understand overall application needs and make recommendations accordingly.

      Partner with project teams and interact with customers to find solutions for projects and operational issues for existing and proposed databases environments.

      Identify, analyze and solve problems related to database, applications or reporting in a thorough, timely manner.

      Develop and document standards, policies, procedures and key performance metrics that support the continued improvement of IT services.

      Assist developers with all the database activities.

      Monitor application related jobs and data replication activities.

      Perform code reviews and SQL code deployments.

      Assist QA Teams in Unit and Stress testing operations.

      Anticipate and devise possible solutions to application and database-related problems.

      Recommend and maintain SQL Server configurations for both production and development environments.

      Assist system DBA team by deploying SQL Server upgrades and service packs.

      Assist system DBA team by performing backup, recovery and archival tasks on database management systems.

      Participate in On-Call rotation for critical production support.

      Perform all routine scheduled SQL Server and database maintenance.

      Background Required:

      Minimum of 5 years of experience working as a database developer/architect in a SQL Server environment.

      Minimum of 3 years of experience working as a database administrator.

      Minimum of 3 years of experience working with large, complex databases in a large enterprise environment

      Possess a solid understanding of the relationships of architecture components within technology infrastructure.

      Expertise with the following technologies: SQL 2008/2012, SSIS, SSAS, SSRS.

      Experience working with .Net, C# technologies.