

Open Password – Thursday, November 7, 2019

# 657

Artificial Intelligence – LexisNexis – Machine Learning – Predictive Analytics – Prescriptive Analytics – Natural Language Processing – Robotic Process Automation – SNS Telecom & IT – Financial Services – Center of Excellence – Silos – Microsoft – Satya Nadella – AI 365 Meetings – Kevin Scott – Nokia – Risto Siilasmaa – Andrew Ng – Baidu – HSBC – Recruitment Drives – Data Scientists – LinkedIn – Allen Blue – Perelman School of Medicine – Kevin Mahoney – PwC – Bhushan Sethi – Ajay Davessar – Amy Ogan – Carnegie Mellon University – American Express – Experian – Press Association (UK) – Volvo – Testing Prototypes – BT Group – GDPR – Data Privacy – Data Security – Customer Data – Data Governance – Google – Hedge Funds – Ethical Advice – Risk Monitoring – Nexis – Cyber Security – Ipsos – TÜV-Verband – Michael Fübi – Regulation – Digital Transformation – IT-Sicherheitsgesetz 2.0 – Cybersecurity Act – Phishing – Ransomware – Artificial Intelligence – Critical Infrastructures – Product Safety – Risk Classes – Datenethikkommission – Data Protection – GDPR Guideline for Fines – BIIA – Data Protection Authorities

Artificial Intelligence

Creating a Culture Supporting AI Success

By Sam Hemmant, Marketing Director – EDDM & DaaS Portfolios, LexisNexis

Artificial intelligence, machine learning, predictive and prescriptive analytics, natural language processing and robotic process automation—whatever your industry, you’ve likely heard these words with increasing frequency over the last year.

CEOs have heard that artificial intelligence (AI) and its subsets can solve pressing business problems and are racing to get their hands on the technologies and the experts who can operate them. A study by SNS Telecom & IT predicts firms in financial services alone will invest $14 billion a year in big data technologies by 2021.

But technology investments and data scientist hires are no guarantee of successful outcomes. CEOs also need to develop a strong foundation by embedding an understanding of AI throughout the company. In this white paper, we explore three factors that contribute to a successful AI culture: people, strategy and technology.

Establishing a Center of Excellence (CoE) to support AI adoption is an important first step. A CoE centralizes AI expertise and takes responsibility for developing the blueprint for AI implementations, spurring innovation and growth across the business.



Large companies often resemble a series of silos. Departments responsible for products, marketing and finance develop their own data strategies and do not necessarily connect these with the rest of the business. Unsurprisingly, then, adoption of AI can be patchy across a company—some silos might use it, others might not. The only way to ensure AI works across these silos, and not against them, is if the CEO makes clear to each unit that it is a priority.

Microsoft added AI to its strategic vision in 2017, but CEO Satya Nadella did not stop there. A year later, he started weekly “AI 365” meetings which are attended by senior executives across the company, in which they update each other on their respective AI projects. In addition to ensuring greater transparency of AI applications in play across the business, the meetings enable teams to learn from others’ experiences, rather than each starting from scratch.

Kevin Scott, Microsoft’s Chief Technology Officer, suggests that having the company’s top executives in the room at the same time means problems encountered by AI projects can be solved quickly. “When there’s friction and obstacles and inefficiencies in the system, people can raise their hands and say, ‘I can’t do this thing,’” Scott says.

Nokia’s Chairman Risto Siilasmaa says CEOs must learn the ropes of AI, machine learning (ML) and data science to get the best use from the technology. “If it is so strategically important for the company, I should understand and we all should understand, at least enough to ask the right questions,” he contends. “That led me to a sort of wake-up moment that I don’t have to wait for others to explain this to me, I can actually move my butt and go back to school myself.”

“In the past, a lot of S&P 500 CEOs wished they had started thinking sooner than they did about their Internet strategy. I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy,” said Andrew Ng, Chief Scientist of Baidu.



Success with AI technologies demands staff who understand how to operate these technologies. HSBC has increased its focus on digital transformation, accompanying it with a recruitment drive for 1,000 data scientists.

Competition to fill data scientist roles is fierce. LinkedIn co-founder Allen Blue says data science jobs have grown by up to 20 times in the last three years. “There are very few data scientists out there passing out their resumes,” he notes. “Data scientists are almost all already employed, because they’re so much in demand.”

Blue says that jobs related to data science and ML represent five of the top 15 fastest-growing jobs in the U.S. today. Companies should work closely with their HR team to devise a recruitment strategy, perhaps collaborating directly with universities that have strong AI programs.




AI should not simply be left to the data scientists. While they understand the technology, they may not understand the particular challenges and opportunities of the company. Front-line staff, on the other hand, understand such needs better than anyone. By educating them on potential AI applications, they are better equipped to identify how AI can improve the way they work.

Nokia chairman Risto Siilasmaa told LexisNexis last year that the telecommunications firm’s 100,000 employees are being given basic training in AI, ML and big data. The University of Pennsylvania’s Perelman School of Medicine is also retraining its workforce in data science. Kevin Mahoney, executive vice dean at the School, said he wants staff to understand “how the data explosion can help you do your job better.” PwC is giving similar training to all its 55,000 employees in the U.S. “I’ve got to believe that over the next few years, data analytics is going to be prevalent,” says partner Bhushan Sethi. “It’s like digital: everyone’s going to need to have a base level understanding of it.”




AI has the potential to answer an incredibly broad range of questions, including questions we have not yet imagined. “AI is much more powerful than [anything] mankind has ever dealt with so far,” said Ajay Davessar, who has founded AI research institutes at some of India’s most prestigious universities. “It can really augment humans in the decisions they make.” This is exciting, but it also underlines the need for companies not to get distracted and to focus on exactly which parts of their business AI will benefit most.

Too many companies appear to be adopting AI for the sake of it, without any underlying strategy. Amy Ogan, Associate Professor in the Human Computer Interaction Institute at Carnegie Mellon University, told LexisNexis in February that companies are scrambling for AI, ML and data scientists “even when they don’t know why they need it.”

The first step of any AI strategy should be to look at your objectives and identify how AI can best be used to achieve them. That is the approach taken by organizations in different sectors:

  • American Express uses AI to identify fraudulent transactions. This not only helps it to spot fraudulent activity and therefore reduce the risk of regulatory fines, but also automates a previously manual process, saving staff time.
  • Credit bureau Experian needs to make quick and accurate decisions on credit scores—the quicker and more accurate, the greater their edge over rivals. The company has developed an AI tool which scans relevant datasets and assigns credit scores almost instantly.
  • The UK’s Press Association has implemented robotic process automation (RPA) to produce thousands of articles each month, allowing it to conserve its human resources for in-depth interviews and investigative journalism that require emotional intelligence.
  • Volvo uses AI-optimized processes to create a system in which information on individual car parts is automatically sent back to the manufacturer, who will then inform a customer when a part needs servicing. This not only makes Volvo more attractive to customers, but it saves the company money on maintenance costs.
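American Express has not published its fraud models, but the core idea behind the first bullet can be sketched with a toy statistical check: flag any transaction that sits far outside an account's usual spending pattern. The threshold and sample history below are invented for illustration; real systems use trained models over many more features than the amount alone.

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Return the indices of transactions whose amount deviates from
    the account's typical spending by more than `threshold` sample
    standard deviations. Assumes at least two transactions."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Invented account history: six routine purchases and one outlier.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 4900.0]
print(flag_suspicious(history))  # → [6]
```

A simple z-score is enough to surface this outlier, but note that a single large fraud inflates the standard deviation and can mask itself; production pipelines use robust statistics or trained classifiers instead.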

This is only the beginning. The possibilities for companies are almost endless, if they have a clear strategy in place.




AI and data science should not be limited to particular teams—all parts of a business should have the opportunity to use the tools. Bhushan Sethi, a partner at PwC, says the accountancy firm is embedding data analytics in every area of the company. Sethi admits PwC now feels an expectation to offer clients a strategy based on big data analytics. “It is no longer good enough to say, here’s a workable strategy; this is kind of what it might look like. We have to actually visualize what those decisions would be; what are the outcomes, what that means to growth, to financials, to engagement.”




Companies should benchmark how well AI tools have achieved the intended objectives. Has predictive analytics, for example, allowed the company to exploit a strategic opportunity, or has machine learning created process efficiencies in an area of the business? If so, what lessons can other areas of the business learn from the experience? To transform the culture of the entire business into one which makes the most of AI, companies need to demonstrate the value of AI internally. Microsoft’s weekly “AI 365” meeting is a good example: leaders from across the business share the problems and successes of AI applications for others to learn from.

When AI can be applied in a repeatable way, the business value increases. Companies should consider testing prototype uses of AI, then expanding their use to other teams. British multinational telecommunications holding company BT Group initially implemented ML for an automated ‘chatbot’ to answer customer service enquiries on its website. Once this was shown to work, it broadened the use of ML across the business, including in vehicle planning for its fleet business.
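BT has not disclosed how its chatbot works. As a sketch of the prototype-first approach the paragraph describes, a first iteration can be as small as keyword routing over a handful of intents; all intents, keywords and responses below are invented for illustration.

```python
# Minimal keyword-routing chatbot prototype. Intents are checked in
# order; the first intent with a matching keyword wins.
INTENTS = {
    "billing": ("bill", "invoice", "charge", "payment"),
    "outage":  ("down", "outage", "no connection", "not working"),
    "upgrade": ("upgrade", "faster", "new plan"),
}

RESPONSES = {
    "billing":  "I can help with billing. Could you share your account number?",
    "outage":   "Sorry to hear that. Let's run a check on your connection.",
    "upgrade":  "Happy to help you upgrade. Which package interests you?",
    "fallback": "Let me connect you with a human agent.",
}

def route(message: str) -> str:
    """Map a customer message to a canned response, or hand off."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return RESPONSES[intent]
    return RESPONSES["fallback"]

print(route("My broadband is down again"))
```

Once a prototype like this proves there is demand, the same pattern of "prove it small, then broaden" applies when replacing the keyword table with a trained intent classifier or extending ML to other teams, as BT did.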




Companies are investing heavily in recruiting data scientists on high salaries, but too many of these data scientists spend most of their time cleaning up messy datasets. Using unreliable data for AI is like putting the wrong kind of fuel into a sports car. No matter how high-tech the AI system, if you put garbage in, you will get garbage out.
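The cleanup chore the paragraph describes can be made concrete. Assuming records arrive as simple (name, email) pairs, a typical first pass trims whitespace, normalizes case, drops incomplete or invalid rows, and removes duplicates; the rules and sample data are invented for illustration.

```python
def clean_records(rows):
    """Normalize and de-duplicate raw (name, email) records before
    they are fed to a model."""
    seen, cleaned = set(), []
    for name, email in rows:
        if not name or not email:
            continue                          # drop incomplete rows
        name, email = name.strip(), email.strip().lower()
        if "@" not in email:
            continue                          # drop invalid emails
        key = (name.lower(), email)
        if key in seen:
            continue                          # drop duplicates
        seen.add(key)
        cleaned.append((name, email))
    return cleaned

raw = [(" Ada Lovelace ", "ADA@example.com"),
       ("Ada Lovelace", "ada@example.com"),
       ("", "ghost@example.com"),
       ("Bob", "not-an-email")]
print(clean_records(raw))  # → [('Ada Lovelace', 'ada@example.com')]
```

Even this toy pass removes three of four raw rows, which is why "garbage in, garbage out" dominates data scientists' time: the rules multiply quickly once real addresses, dates and free text are involved.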

When developing a corporate data and technology strategy, companies should address the following issues:

  • Define the types of data needed and where the data should reside. Often companies have massive amounts of internal data but fail to realize the benefits of the data because it is siloed by department. Those walls need to be broken down to allow companies to capture a more complete picture of the data available and to understand where gaps may need to be filled by third-party data sources.
  • Determine the organizational roles that will take responsibility for the data. Laws like GDPR, plus a skeptical public, mean that data privacy and security are a top-of-mind issue for companies. A fragmented approach can lead to lapses in oversight—and the potential for missteps that could lead to financial and reputational damage.
  • Create guidelines for use of enterprise assets, particularly customer data. Misuse of customer data, or failure to protect it, earns companies plenty of media coverage, but none of it good. Good data governance begins with clear requirements, shared across the enterprise for full visibility, on how data assets should or should not be used.
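Guidelines like those in the last bullet can be enforced mechanically. A deny-by-default access check is one minimal sketch; the roles, datasets and policy table below are invented for illustration, not a real governance framework.

```python
# Illustrative policy table: which roles may read which dataset.
POLICY = {
    "customer_pii":  {"data_protection_officer", "support_lead"},
    "sales_figures": {"finance", "executive"},
    "web_analytics": {"marketing", "finance", "data_science"},
}

def may_access(role: str, dataset: str) -> bool:
    """Deny by default: access is granted only when the policy
    explicitly lists the role for that dataset."""
    return role in POLICY.get(dataset, set())

print(may_access("finance", "sales_figures"))   # → True
print(may_access("marketing", "customer_pii"))  # → False
```

The design choice worth noting is the default: an unknown role or dataset is refused rather than allowed, which is the posture GDPR-era oversight generally expects.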



Most companies today have a mix of paper and digital records, but in the age of AI, converting data for use in machine learning or predictive analytics is a priority. In April 2019, Google announced a new platform that can analyze a scanned page and turn it into machine-readable text.

But companies should also look outside the walls of their business for data sources that deliver relevant content that is normalized for ease-of-use and enriched with metadata to facilitate faster implementation and time-to-insight from AI initiatives.

For example, financial services organizations can use unexpected sources of data to improve the predictive power of their trading decisions. Hedge funds can use AI to monitor broadcast data. Television interviews with a CEO often give a clearer indication of where a company is headed than more commonly used sources. By ingesting broadcast data into existing AI monitoring of financial and company information, hedge funds can improve detection of market signals and make better buy or sell decisions.
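As a deliberately naive sketch of that broadcast-monitoring idea, a lexicon scorer can turn an interview transcript into a coarse trading signal. The word lists, threshold and example quote are invented; real desks use trained NLP models over full transcripts and many other inputs.

```python
# Toy lexicon-based signal from an interview transcript.
POSITIVE = {"growth", "strong", "record", "confident", "expand", "profit"}
NEGATIVE = {"decline", "weak", "loss", "uncertain", "restructuring", "cut"}

def transcript_signal(transcript: str) -> str:
    """Count positive vs. negative words and map the balance to a
    coarse bullish/bearish/neutral label."""
    words = transcript.lower().replace(",", " ").replace(".", " ").split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "bullish"
    if score < 0:
        return "bearish"
    return "neutral"

ceo_quote = "We expect strong growth and record profit next quarter."
print(transcript_signal(ceo_quote))  # → bullish
```

Such a signal would only be one input among many: in practice it would be ingested alongside the financial and company data the AI monitoring already covers.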




In the excitement about AI adoption, companies must maintain active oversight of the risks involved and seek ethical advice from data science and HR experts while considering a strategy. Microsoft is among the most enthusiastic adopters of AI, but it has tempered this enthusiasm with healthy caution.

Microsoft’s 2018 annual report reminds staff that “AI algorithms may be flawed.” It also warns against insufficient or biased information in datasets, as well as controversial data practices that could slow acceptance of AI solutions. “These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm,” the report notes.

It’s not just skewed results that pose a risk. Companies need a comprehensive approach to data security to protect the viability of AI programs that rely on confidential business or customer data. Companies should implement training and robust security standards to limit exposure to data breaches by disgruntled, careless or uninformed employees. In addition, global data privacy regulations vary, so companies must stay on top of compliance requirements and implement appropriate due diligence and risk monitoring processes for cloud service providers and other potential risk points.

But the greatest risk companies face comes from ignoring the AI revolution altogether and being left behind by their competitors. In many sectors, AI adoption remains patchy. Hedge funds and banks have used AI to improve their services to clients, to automate onboarding and KYC processes, and to use multiple data sources to make buy and sell decisions.

It has never been more important to create an AI culture; after all, how long can your company survive without oxygen? See how the Nexis® Data as a Service advantage can help you achieve AI success.




  • Comprehensive – A source universe.
  • Optimal – Flexible Bulk, RESTful and controlled content APIs deliver normalized, semi-structured data at the volume and velocity needed for big data applications.
  • Robust – Smart data, organized and enriched through a combination of expert human curation, advanced analytics and topic tags, for greater veracity.
  • Experienced – A partner with 45+ years of aggregating content, plus patents in clustering and machine learning, for dependable value.

1 “Big Data in the Financial Services Industry: 2018–2030 – Opportunities, Challenges, Strategies & Forecasts,” SNS Telecom & IT, July 2018.

2 Novet, Jordan. “Microsoft’s CEO meets with top execs every week to review AI projects,” CNBC, April 7, 2019.

3 “Nokia Chair Offers Insights into Artificial Intelligence,” LexisNexis, January 10, 2019.

4 Malik, Yogesh. “How to Set-up an Artificial Intelligence Center of Excellence in Your Organization,” The Startup, April 30, 2018.

5 “HSBC bets on big data with new recruitment drive,” LexisNexis, November 7, 2018.

6 “What’s Driving the Demand for Data Scientists?” Knowledge@Wharton, March 8, 2019.

8 “Nokia Chair Offers Insights into Artificial Intelligence,” LexisNexis, January 10, 2019.

9 “What’s Driving the Demand for Data Scientists?” Knowledge@Wharton, March 8, 2019.

12 Lardinois, Frederic. “Google launches an end-to-end AI platform,” TechCrunch, March 29, 2019.

13 “Form 10-K: Annual Report,” Microsoft Corporation, June 30, 2018.


German companies want more regulation

47 percent of companies in Germany are calling for stricter legal requirements for IT security in business. That is the finding of a representative Ipsos survey of 503 companies with ten or more employees, commissioned by the TÜV-Verband. The respondents were IT security officers, heads of IT and members of senior management. … percent agreed with the statement that regulation by lawmakers is important and contributes to better IT security in their own company. “Companies are casting a surprisingly strong vote for stronger legal regulation of IT security in business,” said Dr. Michael Fübi, President of the TÜV-Verband (VdTÜV), at the presentation of the “TÜV Cybersecurity Study” in Berlin.

The most important reasons for the desire for stricter government requirements are companies’ own experiences with cybercrime and the digital transformation. 77 percent said that IT security has become more important to them over the past five years. As reasons for the shift, 78 percent of respondents cited advancing digitalization, 41 percent reports of ever new cyberattacks, and 29 percent an IT security incident in their own company. “A great many companies no longer perceive cyberattacks as an abstract threat; they are directly affected,” said Fübi. “With the planned IT-Sicherheitsgesetz 2.0 in Germany and the Cybersecurity Act in the EU, policymakers have instruments at their disposal to effectively improve protection against cyberattacks in business.”

13 percent of companies experienced an IT security incident in the twelve months before the survey. 26 percent reported phishing attacks, in which malware is smuggled into the organization, usually by email. Ransomware comes second (19 percent): cybercriminals use it to paralyze an organization’s IT systems and then extort the company. Another widespread phenomenon is social engineering (9 percent), in which employees are deliberately manipulated in order to gain access to the company’s IT systems. “The consequences are system outages, lower productivity and services that customers cannot reach – the worst case for any company,” said Fübi. The incidents led to financial losses, but often also to damage to the company’s reputation or other competitive disadvantages.


Fighting criminal hackers with artificial intelligence


Over the past 24 months, companies have taken numerous measures to improve their IT security. 71 percent seek advice from external security specialists. 64 percent have introduced new IT security software, and 60 percent have run training courses for their staff. 32 percent have increased their IT security budget in the past two years, and 17 percent have hired additional IT staff for this purpose. Only one in four companies has carried out emergency drills.

Twelve percent use artificial intelligence for their own protection; among large companies with 250 or more employees, the figure is 38 percent. 90 percent of these AI users deploy it to detect malware, and 70 percent to detect anomalies in data streams. Another application is modern authentication methods, for example facial or voice recognition (37 percent of AI users). 29 percent agree with the statement that AI enables their company to protect itself better. 63 percent say that AI in the hands of cybercriminals poses a growing threat to their company’s IT security: it can be used, for example, to automate and personalize cyberattacks.


The TÜV-Verband’s recommendations

In this situation, the TÜV makes the following recommendations:

Extend the scope of the IT-Sicherheitsgesetz – abandon the KRITIS focus. Minimum standards for IT security are needed in all sectors of the economy. So far, the IT-Sicherheitsgesetz covers only operators of critical infrastructures (KRITIS), and within this group only a few companies – around 1,700 across Germany. Certain industries such as waste management, vehicle manufacturing, mechanical engineering and the chemical industry are not covered.

Implement the Cybersecurity Act: add IT security to product safety. The Cybersecurity Act has been in force since July 2019 and creates a general legal framework for the IT security of connected products and services. “We need to redefine the concept of product safety in the EU,” said Fübi. “In future, digital security must be an integral part of a product alongside functional safety.” Only then should a product be allowed onto the European market. This will work, however, only if IT security requirements are consistently enshrined in the directives for the individual product groups – for machinery, toys, medical devices, vehicles and many other products.

Audit artificial intelligence by risk class. In highly automated vehicles, people’s physical safety depends directly on systems with artificial intelligence. In such cases, the functions of self-learning algorithms must be verifiable by an external body, which requires auditors to have access to the systems’ software and data. Depending on the risk class of an AI system, different security and audit requirements can be applied. The German government’s Datenethikkommission (Data Ethics Commission) also took up corresponding proposals in its final report.


German guidance on GDPR fines

The German data protection authorities (‘DPAs’) have published their guidelines (in German) for calculating administrative fines under Article 83 of the GDPR.

The Guidelines are intended to guide enforcement action by German DPAs against business ‘undertakings’. They do not apply to individuals or associations who are not acting in a business capacity. Importantly, the methodology set out in the Guidelines for calculating fines is not intended to be exhaustive and will be subject to further specification by the European Data Protection Board. Further, the Guidelines are not expected to be binding in cases of cross-border processing or for any non-German DPA.

See the Konferenz der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder, “Konzept der unabhängigen Datenschutzaufsichtsbehörden des Bundes und der Länder zur Bußgeldzumessung in Verfahren gegen Unternehmen”.

Source: BIIA

Archiv & Touchpoint

The Open Password Archiv Plus bundles more than 1,100 articles from the Open Password push services since 2016.



