{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from flashtext import KeywordProcessor\n", "import pandas as pd\n", "from pathlib import Path\n", "from collections import defaultdict\n", "from IPython.display import display, HTML" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[WindowsPath('../data/v1/WikiCSSH_categories.csv'),\n", " WindowsPath('../data/v1/WikiCSSH_category2page.csv'),\n", " WindowsPath('../data/v1/WikiCSSH_category_links.csv'),\n", " WindowsPath('../data/v1/WikiCSSH_category_links_all.csv'),\n", " WindowsPath('../data/v1/Wikicssh_core_categories.csv'),\n", " WindowsPath('../data/v1/WikiCSSH_page2redirect.csv')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wikicssh_path = Path(\"../data/v1\")\n", "wikicssh_files = list(wikicssh_path.glob(\"./*.csv\"))\n", "wikicssh_files" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Wall time: 20.4 s\n" ] } ], "source": [ "%%time\n", "page2cats = (\n", " pd.read_csv('../data/v1/WikiCSSH_category2page.csv')\n", " .groupby(\"page_title\")\n", " .cat_title\n", " .agg(lambda x: list(x))\n", " .to_dict()\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
categorylevel
0Computer_science1
1Mathematics1
2Information_science1
3Computer_engineering1
4Statistics1
\n", "
" ], "text/plain": [ " category level\n", "0 Computer_science 1\n", "1 Mathematics 1\n", "2 Information_science 1\n", "3 Computer_engineering 1\n", "4 Statistics 1" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.read_csv(wikicssh_files[4]).head()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "processor = KeywordProcessor()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Wall time: 124 ms\n" ] } ], "source": [ "%%time\n", "# categories\n", "processor.add_keywords_from_dict(\n", " {\n", " f'Category:{k}': [f'{k.lower().replace(\"_\", \" \")}']\n", " for k in pd.read_csv(\"../data/v1/WikiCSSH_categories.csv\").category.values\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Wall time: 8.44 s\n" ] } ], "source": [ "%%time\n", "for row in pd.read_csv('../data/v1/WikiCSSH_page2redirect.csv').values:\n", " #print(row)\n", " #break\n", " if isinstance(row[-1], float):\n", " row[-1] = row[0]\n", " processor.add_keyword(row[-1].lower().replace(\"_\", \" \"), row[0])\n", "#df_redirects = pd.read_csv(wikicssh_files[4]) # redirects\n", "#df_redirects.head()\n", "\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "text = \"\"\"In the last decade, we experienced an urgent need for a flexible, context-sensitive, fine-grained, and machine-actionable representation of scholarly knowledge and corresponding infrastructures for knowledge curation, publishing and processing. Such technical infrastructures are becoming increasingly popular in representing scholarly knowledge as structured, interlinked, and semantically rich Scientific Knowledge Graphs (SKG). 
Knowledge graphs are large networks of entities and relationships, usually expressed in W3C standards such as OWL and RDF. SKGs focus on the scholarly domain and describe the actors (e.g., authors, organizations), the documents (e.g., publications, patents), and the research knowledge (e.g., research topics, tasks, technologies) in this space as well as their reciprocal relationships. These resources provide substantial benefits to researchers, companies, and policymakers by powering several data-driven services for navigating, analysing, and making sense of research dynamics. Some examples include Microsoft Academic Graph (MAG), Open Academic Graph (combining MAG and AMiner), ScholarlyData, PID Graph, Open Research Knowledge Graph, OpenCitations, and OpenAIRE research graph. Current challenges in this area include: i) the design of ontologies able to conceptualise scholarly knowledge, ii) (semi-)automatic extraction of entities and concepts, integration of information from heterogeneous sources, identification of duplicates, finding connections between entities, and iii) the development of new services using this data, that allow to explore this information, measure research impact and accelerate science. 
This workshop aims at bringing together researchers and practitioners from different fields (including, but not limited to, Digital Libraries, Information Extraction, Machine Learning, Semantic Web, Knowledge Engineering, Natural Language Processing, Scholarly Communication, and Bibliometrics) in order to explore innovative solutions and ideas for the production and consumption of Scientific Knowledge Graphs (SKGs).\"\"\"" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('Experience', 23, 34),\n", " ('Granularity', 85, 97),\n", " ('Scholarly_method', 140, 149),\n", " ('Knowledge', 150, 159),\n", " ('Knowledge', 198, 207),\n", " ('Scholarly_method', 326, 335),\n", " ('Knowledge', 336, 345),\n", " ('Semantics', 378, 390),\n", " ('Knowledge', 407, 416),\n", " ('Category:Graphs', 417, 423),\n", " ('Knowledge', 431, 440),\n", " ('Category:Graphs', 441, 447),\n", " ('Entity', 470, 478),\n", " ('World_Wide_Web_Consortium', 519, 532),\n", " ('Scholarly_method', 572, 581),\n", " ('Document', 649, 658),\n", " ('Research', 698, 706),\n", " ('Knowledge', 707, 716),\n", " ('Research', 724, 732),\n", " ('Category:Space', 770, 775),\n", " ('Research', 867, 878),\n", " ('Business', 880, 889),\n", " ('Research', 996, 1004),\n", " ('CONFIG.SYS', 1029, 1036),\n", " ('Microsoft_Academic', 1037, 1055),\n", " ('Academy_(educational_institution)', 1074, 1082),\n", " ('Open_research', 1143, 1156),\n", " ('Ontology_(information_science)', 1157, 1172),\n", " ('Research', 1202, 1210),\n", " ('Category:Area', 1245, 1249),\n", " ('CONFIG.SYS', 1250, 1257),\n", " ('Category:Design', 1266, 1272),\n", " ('Ontology', 1276, 1286),\n", " ('Concept', 1295, 1308),\n", " ('Scholarly_method', 1309, 1318),\n", " ('Knowledge', 1319, 1328),\n", " ('2', 1330, 1332),\n", " ('Numeral_prefix', 1335, 1340),\n", " ('Entity', 1365, 1373),\n", " ('Concept', 1378, 1386),\n", " ('Category:Information', 1403, 1414),\n", " ('Homogeneity_and_heterogeneity', 
1420, 1433),\n", " ('Category:Identification', 1443, 1457),\n", " ('Entity', 1501, 1509),\n", " ('Category:Information', 1596, 1607),\n", " ('Research', 1617, 1625),\n", " ('Acceleration', 1637, 1647),\n", " ('Research', 1697, 1708),\n", " ('Digital_library', 1781, 1798),\n", " ('Information_extraction', 1800, 1822),\n", " ('Machine_learning', 1824, 1840),\n", " ('Semantic_Web', 1842, 1854),\n", " ('Knowledge_engineering', 1856, 1877),\n", " ('Natural_language_processing', 1879, 1906),\n", " ('Scholarly_communication', 1908, 1931),\n", " ('Category:Bibliometrics', 1937, 1950),\n", " ('Innovation', 1972, 1982),\n", " ('Solution', 1983, 1992),\n", " ('Idea', 1997, 2002),\n", " ('Category:Consumption', 2026, 2037),\n", " ('Knowledge', 2052, 2061),\n", " ('Category:Graphs', 2062, 2068)]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "processor.extract_keywords(text, span_info=True)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "def get_html(text, processor):\n", "    spans = processor.extract_keywords(text, span_info=True)\n", "    prev = 0\n", "    parts = []\n", "    category_counts = defaultdict(int)\n", "    for entity, start, end in spans:\n", "        # Category: keywords count as their own category; pages contribute all their categories\n", "        if entity.startswith(\"Category:\"):\n", "            entity_cats = [entity.replace(\"Category:\", \"\")]\n", "        else:\n", "            entity_cats = page2cats.get(entity, [])\n", "        for cat in entity_cats:\n", "            category_counts[cat] += 1\n", "        if start > prev:\n", "            parts.append(text[prev:start])\n", "        parts.append(f\"<mark>{text[start:end]}</mark>\")\n", "        prev = end\n", "    parts.append(text[prev:])  # keep any text after the last matched keyword\n", "    tagged_doc = \"\".join(parts).replace(\"\\n\", \"<br>\")\n", "    pred_categories = \" | \".join([\n", "        f\"{k} ({v})\"\n", "        for k, v in sorted(category_counts.items(), key=lambda x: x[1], reverse=True)\n", "    ])\n", "    final_div = f\"\"\"
\n", "
\n", "

Tagged document:

\n", " {tagged_doc}\n", "
\n", "
\n", "

Predicted categories:

\n", " {pred_categories}\n", "
\n", "
\"\"\"\n", " return HTML(final_div)\n", " " ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Tagged document:

\n", " In the last decade, we experienced an urgent need for a flexible, context-sensitive, fine-grained, and machine-actionable representation of scholarly knowledge and corresponding infrastructures for knowledge curation, publishing and processing. Such technical infrastructures are becoming increasingly popular in representing scholarly knowledge as structured, interlinked, and semantically rich Scientific Knowledge Graphs (SKG). Knowledge graphs are large networks of entities and relationships, usually expressed in W3C standards such as OWL and RDF. SKGs focus on the scholarly domain and describe the actors (e.g., authors, organizations), the documents (e.g., publications, patents), and the research knowledge (e.g., research topics, tasks, technologies) in this space as well as their reciprocal relationships. These resources provide substantial benefits to researchers, companies, and policymakers by powering several data-driven services for navigating, analysing, and making sense of research dynamics. Some examples include Microsoft Academic Graph (MAG), Open Academic Graph (combining MAG and AMiner), ScholarlyData, PID Graph, Open Research Knowledge Graph, OpenCitations, and OpenAIRE research graph. Current challenges in this area include: i) the design of ontologies able to conceptualise scholarly knowledge, ii) (semi-)automatic extraction of entities and concepts, integration of information from heterogeneous sources, identification of duplicates, finding connections between entities, and iii) the development of new services using this data, that allow to explore this information, measure research impact and accelerate science. 
This workshop aims at bringing together researchers and practitioners from different fields (including, but not limited to, Digital Libraries, Information Extraction, Machine Learning, Semantic Web, Knowledge Engineering, Natural Language Processing, Scholarly Communication, and Bibliometrics) in order to explore innovative solutions and ideas for the production and consumption of Scientific Knowledge Graphs\n", "
\n", "
\n", "

Predicted categories:

\n", " Knowledge (15) | Research (8) | Research_methods (7) | Academia (6) | Methodology (4) | Ontology (4) | Graphs (3) | Data_modeling_diagrams (3) | Knowledge_engineering (3) | Semantic_Web (3) | Concepts (3) | Meaning_(philosophy_of_language) (2) | Web_services (2) | Information_science (2) | Configuration_files (2) | Ontology_(information_science) (2) | Design (2) | Abstraction (2) | Mental_content (2) | Information (2) | Library_science (2) | Artificial_intelligence (2) | Natural_language_processing (2) | Perception (1) | Statistical_mechanics (1) | Web_development (1) | Space (1) | Entrepreneurship (1) | Database_stubs (1) | Online_databases (1) | Open_content (1) | Open_science (1) | Collaboration (1) | Knowledge_representation (1) | Knowledge_bases (1) | Area (1) | Integers (1) | Numeral_systems (1) | Chemical_reactions (1) | Identification (1) | Acceleration (1) | Machine_learning (1) | Cybernetics (1) | Learning (1) | Internet_ages (1) | Emerging_technologies (1) | Computational_fields_of_study (1) | Speech_recognition (1) | Computational_linguistics (1) | Bibliometrics (1) | Innovation (1) | Dosage_forms (1) | Alchemical_processes (1) | Solutions (1) | Creativity (1) | Consumption (1)\n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "text = \"\"\"Methods for extracting entities (methods, research topics, technologies, tasks, materials, metrics, research contributions) and relationships from research publications\n", "Methods for extracting metadata about authors, documents, datasets, grants, affiliations and others.\n", "Data models (e.g., ontologies, vocabularies, schemas) for the description of scholarly data and the linking between scholarly data/software and academic papers that report or cite them\n", "Description of citations for scholarly articles, data and software and their interrelationships\n", "Applications for the (semi-)automatic annotation of scholarly papers\n", "Theoretical models describing the rhetorical and argumentative structure of scholarly papers and their application in practice\n", "Methods for quality assessment of scientific knowledge graphs\n", "Description and use of provenance information of scholarly data\n", "Methods for the exploration, retrieval and visualization of scientific knowledge graphs\n", "Pattern discovery of scholarly data\n", "Scientific claims identification from textual contents\n", "Automatic or semi-automatic approaches to making sense of research dynamics\n", "Content- and data-based analysis on scholarly papers\n", "Automatic semantic enhancement of existing scholarly libraries and papers\n", "Reconstruction, forecasting and monitoring of scholarly data\n", "Novel user interfaces for interaction with paper, metadata, content, software and data\n", "Visualisation of related papers or data according to multiple dimensions (semantic similarity of abstracts, keywords, etc.)\n", "Applications for making sense of scholarly data\"\"\"" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { 
"text/html": [ "
\n", "
\n", "

Tagged document:

\n", " Methods for extracting entities (methods, research topics, technologies, tasks, materials, metrics, research contributions) and relationships from research publications
Methods for extracting metadata about authors, documents, datasets, grants, affiliations and others.
Data models (e.g., ontologies, vocabularies, schemas) for the description of scholarly data and the linking between scholarly data/software and academic papers that report or cite them
Description of citations for scholarly articles, data and software and their interrelationships
Applications for the (semi-)automatic annotation of scholarly papers
Theoretical models describing the rhetorical and argumentative structure of scholarly papers and their application in practice
Methods for quality assessment of scientific knowledge graphs
Description and use of provenance information of scholarly data
Methods for the exploration, retrieval and visualization of scientific knowledge graphs
Pattern discovery of scholarly data
Scientific claims identification from textual contents
Automatic or semi-automatic approaches to making sense of research dynamics
Content- and data-based analysis on scholarly papers
Automatic semantic enhancement of existing scholarly libraries and papers
Reconstruction, forecasting and monitoring of scholarly data
Novel user interfaces for interaction with paper, metadata, content, software and data
Visualisation of related papers or data according to multiple dimensions (semantic similarity of abstracts, keywords, etc.)
Applications for making sense of scholarly\n", "
\n", "
\n", "

Predicted categories:

\n", " Academia (12) | Methodology (11) | Knowledge (6) | Research_methods (4) | Research (4) | Meaning_(philosophy_of_language) (3) | Computer_science (3) | Software (3) | Metadata (2) | Ontology (2) | Graphs (2) | Data_modeling_diagrams (1) | Metrics (1) | Information_science (1) | Data_modeling (1) | Lexicography (1) | Vocabulary (1) | Numeral_systems (1) | Inductive_reasoning (1) | Abstraction (1) | Theories (1) | Critical_thinking_skills (1) | Quality_assurance (1) | Information (1) | Identification (1) | Structuralism (1) | Analysis (1) | Library_science (1) | Forecasting (1) | Human-machine_interaction (1) | Virtual_reality (1) | User_interfaces (1) | User_interface_techniques (1) | Papermaking (1) | Packaging_materials (1) | Printing_materials (1) | Mathematical_concepts (1) | Abstract_algebra (1) | Geometric_measurement (1) | Dimension (1) | Mathematical_notation (1) | Punctuation (1)\n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "text=\"\"\"One of the most common AI techniques used for processing big data is machine learning, a self-adaptive algorithm that gets increasingly better analysis and patterns with experience or with newly added data.\n", "\n", "If a digital payments company wanted to detect the occurrence or potential for fraud in its system, it could employ machine learning tools for this purpose. The computational algorithm built into a computer model will process all transactions happening on the digital platform, find patterns in the data set, and point out any anomaly detected by the pattern.\n", "\n", "Deep learning, a subset of machine learning, utilizes a hierarchical level of artificial neural networks to carry out the process of machine learning. The artificial neural networks are built like the human brain, with neuron nodes connected together like a web. While traditional programs build analysis with data in a linear way, the hierarchical function of deep learning systems enables machines to process data with a nonlinear approach.\"\"\"" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Tagged document:

\n", " One of the most common AI techniques used for processing big data is machine learning, a self-adaptive algorithm that gets increasingly better analysis and patterns with experience or with newly added data.

If a digital payments company wanted to detect the occurrence or potential for fraud in its system, it could employ machine learning tools for this purpose. The computational algorithm built into a computer model will process all transactions happening on the digital platform, find patterns in the data set, and point out any anomaly detected by the pattern.

Deep learning, a subset of machine learning, utilizes a hierarchical level of artificial neural networks to carry out the process of machine learning. The artificial neural networks are built like the human brain, with neuron nodes connected together like a web. While traditional programs build analysis with data in a linear way, the hierarchical function of deep learning systems enables machines to process data with a nonlinear\n", "
\n", "
\n", "

Predicted categories:

\n", " Cybernetics (5) | Computational_neuroscience (4) | Machine_learning (4) | Learning (4) | Patterns (4) | Computational_fields_of_study (2) | Analysis (2) | Design (2) | Deep_learning (2) | Hierarchy (2) | Computational_statistics (2) | Artificial_neural_networks (2) | Classification_algorithms (2) | Mathematical_psychology (2) | Integers (1) | Unsolved_problems_in_computer_science (1) | Artificial_intelligence (1) | Emerging_technologies (1) | Techniques (1) | Data_management (1) | Transaction_processing (1) | Distributed_computing_problems (1) | Big_data (1) | Software_engineering_stubs (1) | Algorithms (1) | Transducers (1) | Sensors (1) | Measuring_instruments (1) | Knowledge_representation (1) | Metalogic (1) | Abstraction (1) | Information-theoretically_secure_algorithms (1) | Employment (1) | Theoretical_computer_science (1) | Computability_theory (1) | Virtual_reality (1) | Computational_science (1) | Simulation_software (1) | Scientific_modeling (1) | Mathematical_objects (1) | Set_theory (1) | Basic_concepts_in_set_theory (1) | Brain (1) | Broadband (1) | Wireless_networking (1) | Elementary_algebra (1) | Dynamical_systems (1) | Nonlinear_systems (1)\n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "text=\"\"\"Commonsense knowledge graph reasoning(CKGR) is the task of predicting a missing entity given one existing and the relation in a commonsense knowledge graph (CKG). Existing methods can be classified into two categories generation method and selection method. Each method has its own advantage. We theoretically and empirically compare the two methods, finding the selection method is more suitable than the generation method in CKGR. Given the observation, we further combine the structure of neural Text Encoder and Knowledge Graph Embedding models to solve the selection method's two problems, achieving competitive results. We provide a basic framework and baseline model for subsequent CKGR tasks by selection methods.\"\"\"" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Tagged document:

\n", " Commonsense knowledge graph reasoning(CKGR) is the task of predicting a missing entity given one existing and the relation in a commonsense knowledge graph (CKG). Existing methods can be classified into two categories generation method and selection method. Each method has its own advantage. We theoretically and empirically compare the two methods, finding the selection method is more suitable than the generation method in CKGR. Given the observation, we further combine the structure of neural Text Encoder and Knowledge Graph Embedding models to solve the selection method's two problems\n", "
\n", "
\n", "

Predicted categories:

\n", " Knowledge_bases (3) | Integers (3) | Ontology (2) | Reasoning (1) | Belief (1) | Epistemology (1) | Ion_channels (1) | Futurology (1) | Biological_databases (1) | Nervous_system (1) | Information_science (1) | Knowledge_engineering (1) | Knowledge_representation (1) | Ontology_(information_science) (1) | Semantic_Web (1) | Automata_(computation) (1) | Pattern_matching (1) | Formal_languages (1) | Programming_constructs (1) | Regular_expressions (1)\n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "text=\"\"\"We introduce several measures of novelty for a scientific article in MEDLINE based on the temporal profiles of its assigned Medical Subject Headings (MeSH). First, temporal profiles for all MeSH terms (and pairs of MeSH terms) were characterized empirically and modelled as logistic growth curves. Second, a paper's novelty is captured by its youngest MeSH (and pairs of MeSH) as measured in years and volume of prior work. Across all papers in MEDLINE published since 1985, we find that individual concept novelty is rare (2.7% of papers have a MeSH ≤ 3 years old; 1.0% have a MeSH ≤ 20 papers old), while combinatorial novelty is the norm (68% have a pair of MeSH ≤ 3 years old; 90% have a pair of MeSH ≤ 10 papers old). Furthermore, these novelty measures exhibit complex correlations with article impact (as measured by citations received) and authors' professional age.\"\"\"" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Tagged document:

\n", " We introduce several measures of novelty for a scientific article in MEDLINE based on the temporal profiles of its assigned Medical Subject Headings (MeSH). First, temporal profiles for all MeSH terms (and pairs of MeSH terms) were characterized empirically and modelled as logistic growth curves. Second, a paper's novelty is captured by its youngest MeSH (and pairs of MeSH) as measured in years and volume of prior work. Across all papers in MEDLINE published since 1985, we find that individual concept novelty is rare (2.7% of papers have a MeSH ≤ 3 years old; 1.0% have a MeSH ≤ 20 papers old), while combinatorial novelty is the norm (68% have a pair of MeSH ≤ 3 years old; 90% have a pair of MeSH ≤ 10 papers old). Furthermore, these novelty measures exhibit complex correlations with article impact (as measured\n", "
\n", "
\n", "

Predicted categories:

\n", " Biological_databases (11) | Thesauri (9) | Library_cataloging_and_classification (9) | Medical_classification (9) | Innovation (5) | Bibliographic_databases_and_indexes (2) | Medical_databases (2) | Online_databases (2) | Interpersonal_relationships (2) | Interpersonal_communication (2) | Relationship_counseling (2) | Family_therapy (2) | Curves (2) | Accuracy_and_precision (2) | Measurement (2) | Metrology (2) | Information_science (1) | Robotics (1) | Quantification (1) | Population_ecology (1) | Special_functions (1) | Differential_equations (1) | Papermaking (1) | Packaging_materials (1) | Printing_materials (1) | Volume (1) | Mathematical_constants (1) | Transcendental_numbers (1) | Real_transcendental_numbers (1) | Integers (1) | Combinatorics (1) | Radioactivity (1) | Conjugate_prior_distributions (1) | Continuous_distributions (1) | Location-scale_family_probability_distributions (1) | Stable_distributions (1) | Normal_distribution (1) | Complex_systems_theory (1) | Covariance_and_correlation (1) | Dimensionless_numbers (1)\n", "
\n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "text=\"\"\"Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.\n", "If you are just starting out in the field of deep learning or you had some experience with neural networks some time ago, you may be confused. I know I was confused initially and so were many of my colleagues and friends who learned and used neural networks in the 1990s and early 2000s.\n", "The leaders and experts in the field have ideas of what deep learning is and these specific and nuanced perspectives shed a lot of light on what deep learning is all about.\n", "In this post, you will discover exactly what deep learning is by hearing from a range of experts and leaders in the field.\"\"\"" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "

Tagged document:

\n", " Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.
If you are just starting out in the field of deep learning or you had some experience with neural networks some time ago, you may be confused. I know I was confused initially and so were many of my colleagues and friends who learned and used neural networks in the 1990s and early 2000s.
The leaders and experts in the field have ideas of what deep learning is and these specific and nuanced perspectives shed a lot of light on what deep learning is all about.
In this post, you will discover exactly what deep learning is by hearing from a range of experts\n", "
\n", " \n", "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(get_html(text, processor))" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "181070" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(page2cats)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Interactive usage" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import interact_manual, widgets, Layout" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5cd74bf42aa64e469f7c5df70eec3f39", "version_major": 2, "version_minor": 0 }, "text/plain": [ "interactive(children=(Textarea(value='Deep Learning is a subfield of machine learning concerned with algorithm…" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "text_area_widget = widgets.Textarea(\n", " value=text,\n", " placeholder=\"Type your text hear\",\n", " description='String:',\n", " disabled=False,\n", " layout=Layout(width=\"90%\")\n", ")\n", "text_area_widget.rows=10;\n", "interact_manual(lambda text: get_html(text, processor), text=text_area_widget);" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 }