Validated 70-776 Free Practice Questions 2019

70-776 Royal Pack Testengine pdf

100% Actual & Verified — 100% PASS

Unlimited access to the world's largest Dumps library! Try it Free Today!

https://www.exambible.com/70-776-exam/

Product Description:
Exam Number/Code: 70-776
Exam name: Perform Big Data Engineering on Microsoft Cloud Services (beta)
91 questions with full explanations
Certification: Microsoft Certification
Last updated: synchronized continuously

Free Certification Real IT 70-776 Exam pdf Collection

Your success in 70-776 Braindumps is our sole target and we develop all our 70-776 Free Practice Questions in a way that facilitates the attainment of this target. Not only is our 70-776 Exam Dumps material the best you can find, it is also the most detailed and the most updated. 70-776 Study Guides for Microsoft 70-776 are written to the highest standards of technical accuracy.

Check 70-776 free dumps before getting the full version:

NEW QUESTION 1
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are troubleshooting a slice in Microsoft Azure Data Factory for a dataset that has been in a waiting state for the last three days. The dataset should have been ready two days ago.
The dataset is being produced outside the scope of Azure Data Factory. The dataset is defined by using the following JSON code.
70-776 dumps exhibit
You need to modify the JSON code to ensure that the dataset is marked as ready whenever there is data in the data store.
Solution: You add a structure property to the dataset.
Does this meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-create-datasets
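
Background, to make the answer concrete (this JSON is a sketch, not the original exhibit; names and paths are hypothetical): in Data Factory v1, a dataset produced outside the factory is marked with the external property, optionally tuned through an externalData policy. Adding a structure property only describes the schema and has no effect on slice readiness, which is why the proposed solution fails.

{
  "name": "InputDataset",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "folderPath": "inputcontainer/data/"
    },
    "external": true,
    "availability": {
      "frequency": "Hour",
      "interval": 1
    },
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}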

NEW QUESTION 2
You plan to deploy a Microsoft Azure virtual machine that will host a data warehouse. The data warehouse will contain a 10-TB database.
You need to provide the fastest read and write times for the database. Which disk configuration should you use?

  • A. storage pools with mirrored disks
  • B. RAID 5 volumes
  • C. spanned volumes
  • D. striped volumes
  • E. storage pools with striped disks

Answer: E
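
For background (not part of the question): on a Windows virtual machine, a striped storage pool corresponds to a Storage Spaces virtual disk with the Simple resiliency setting. A minimal PowerShell sketch, assuming the data disks are already attached and poolable:

# Gather the attached data disks that can join a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool, then a striped (Simple resiliency) virtual disk across it
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DWDisk" -ResiliencySettingName Simple -UseMaximumSize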

NEW QUESTION 3
You are building a Microsoft Azure Stream Analytics job definition that includes inputs, queries, and outputs.
You need to create a job that automatically provides the highest level of parallelism to the compute instances.
What should you do?

  • A. Configure event hubs and blobs to use the PartitionKey field as the partition ID.
  • B. Set the partition key for the inputs, queries, and outputs to use the same partition folder. Configure the queries to use uniform partition keys.
  • C. Set the partition key for the inputs, queries, and outputs to use the same partition folder. Configure the queries to use different partition keys.
  • D. Define the number of input partitions to equal the number of output partitions.

Answer: A

Explanation:
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
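
As background (not from the exhibit): the highest level of parallelism comes from an "embarrassingly parallel" topology in which the input partition key flows through the whole query. A hedged sketch in the Stream Analytics query language, assuming a hypothetical event hub input named input1 and an output named output1:

SELECT DeviceId, COUNT(*) AS EventCount
INTO output1
FROM input1 TIMESTAMP BY EventTime
PARTITION BY PartitionId
GROUP BY DeviceId, PartitionId, TumblingWindow(minute, 1)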

NEW QUESTION 4
DRAG DROP
You need to design a Microsoft Azure solution to analyze text from a Twitter data stream. The solution must identify a sentiment score of positive, negative, or neutral for the tweets.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit

NEW QUESTION 5
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must always be available for reading.
End of repeated scenario.
You need to connect AzureDF to the storage account. What should you create?

  • A. a gateway
  • B. a dataset
  • C. a linked service
  • D. a pipeline

Answer: C

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-blob-connector
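
For context (hypothetical names, consistent with the referenced connector documentation): in Data Factory v1, the connection to a storage account is declared as a linked service, for example:

{
  "name": "StorageLinkedService",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
    }
  }
}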

NEW QUESTION 6
HOTSPOT
You have a Microsoft Azure Data Lake Analytics service.
You have a file named Employee.tsv that contains data on employees. Employee.tsv contains seven columns named EmpId, Start, FirstName, LastName, Age, Department, and Title.
You need to create a Data Lake Analytics job to transform Employee.tsv, define a schema for the data, and output the data to a CSV file. The output data must contain only employees who are in the sales department. The Age column must allow NULL.
How should you complete the U-SQL code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-get-started
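
A hedged U-SQL sketch of the kind of script the hotspot asks you to complete, with hypothetical input and output paths; note the nullable int? type for Age and the filter on the sales department:

@employees =
    EXTRACT EmpId int,
            Start DateTime,
            FirstName string,
            LastName string,
            Age int?,
            Department string,
            Title string
    FROM "/input/Employee.tsv"
    USING Extractors.Tsv();

@sales =
    SELECT EmpId, FirstName, LastName, Age
    FROM @employees
    WHERE Department == "Sales";

OUTPUT @sales
TO "/output/SalesEmployees.csv"
USING Outputters.Csv();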

NEW QUESTION 7
HOTSPOT
You are creating a series of activities for a Microsoft Azure Data Factory. The first activity will copy an input dataset named Dataset1 to an output dataset named Dataset2. The second activity will copy a dataset named Dataset3 to an output dataset named Dataset4.
Dataset1 is located in Azure Table Storage. Dataset2 is located in Azure Blob storage. Dataset3 is located in an Azure Data Lake store. Dataset4 is located in an Azure SQL data warehouse.
You need to configure the inputs for the second activity. The solution must ensure that Dataset3 is copied after Dataset2 is created.
How should you complete the JSON code for the second activity? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://github.com/aelij/azure-content/blob/master/articles/data-factory/data-factory-create-pipelines.md
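
Background (the exhibit is missing, so this fragment is illustrative only, with hypothetical activity names): in Data Factory v1, ordering between activities is expressed through datasets; declaring Dataset2 as an extra input of the second activity makes that activity wait until Dataset2 has been produced, even though only Dataset3 is actually copied:

{
  "name": "CopyDataset3ToDataset4",
  "type": "Copy",
  "inputs": [
    { "name": "Dataset3" },
    { "name": "Dataset2" }
  ],
  "outputs": [
    { "name": "Dataset4" }
  ],
  "typeProperties": {
    "source": { "type": "AzureDataLakeStoreSource" },
    "sink": { "type": "SqlDWSink" }
  }
}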

NEW QUESTION 8
HOTSPOT
You plan to implement a Microsoft Azure Stream Analytics job to track the data from IoT devices. You will have the following two jobs:
- Job1 will contain a query that has one non-partitioned step.
- Job2 will contain a query that has two steps. One of the steps is partitioned.
What is the maximum number of streaming units that will be consumed per job? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-scale-jobs
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-streaming-unit-consumption
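
A sketch of the streaming-unit arithmetic behind this question (the exact answer-area values are in the missing exhibit): under the model in the referenced docs, a non-partitioned step can consume at most 6 streaming units, while a partitioned step can consume 6 units per partition. Assuming, for illustration, that the partitioned step in Job2 reads from an input with N partitions:

Job1: 1 non-partitioned step x 6 SU = 6 SUs
Job2: (1 non-partitioned step x 6 SU) + (1 partitioned step x N partitions x 6 SU) = 6 + 6N SUs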

NEW QUESTION 9
DRAG DROP
You are building a data pipeline that uses Microsoft Azure Stream Analytics.
Alerts are generated when the aggregate of data streaming in from devices during a minute-long window matches the values in a rule.
You need to retrieve the following information:
* The event ID
* The device ID
* The application ID that runs the service
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit

NEW QUESTION 10
You need to define an input dataset for a Microsoft Azure Data Factory pipeline.
Which properties should you include when you define the dataset?

  • A. name, type, typeProperties, and availability
  • B. name, typeProperties, structure, and availability
  • C. name, policy, structure, and external
  • D. name, type, policy, and structure

Answer: A

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-create-datasets
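
For reference (hypothetical values, consistent with the linked dataset documentation): name, type, typeProperties, and availability are required when defining a v1 dataset, while structure, policy, and external are optional. A minimal sketch (linkedServiceName is shown as well because a dataset must point at a linked service):

{
  "name": "InputDataset",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "folderPath": "container/inputfolder/"
    },
    "availability": {
      "frequency": "Day",
      "interval": 1
    }
  }
}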

NEW QUESTION 11
You use Microsoft Azure Data Lake Store as the default storage for an Azure HDInsight cluster.
You establish an SSH connection to the HDInsight cluster.
You need to copy files from the HDInsight cluster to the Data Lake Store.
Which command should you use?

  • A. AzCopy
  • B. hdfs dfs
  • C. hadoop fs
  • D. AdlCopy

Answer: D
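
For context (placeholder account names and key): AdlCopy copies data between Azure Storage blobs and Data Lake Store, or between two Data Lake Store accounts. A hedged example of the blob-to-store form:

AdlCopy /source https://mystorage.blob.core.windows.net/mycontainer/ /dest swebhdfs://mydatalake.azuredatalakestore.net/clusterdata/ /sourcekey <storage-account-key>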

NEW QUESTION 12
DRAG DROP
You have a Microsoft Azure Stream Analytics solution that captures website visits and user interactions on the website.
You have the sample input data described in the following table.
70-776 dumps exhibit
You have the sample output described in the following table.
70-776 dumps exhibit
How should you complete the script? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-stream-analytics-query-patterns
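
The input and output tables for this question did not survive extraction; purely as a flavor of the referenced query patterns (not the actual answer, and with hypothetical input and output names), a sketch that counts interactions per user over one-minute windows:

SELECT UserId, COUNT(*) AS Interactions
INTO output1
FROM visits TIMESTAMP BY EventTime
GROUP BY UserId, TumblingWindow(minute, 1)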

NEW QUESTION 13
DRAG DROP
You need to copy data from Microsoft Azure SQL Database to Azure Data Lake Store by using Azure Data Factory.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-overview
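
Background sketch (hypothetical names; the actual action list is in the missing exhibit): the usual sequence is to create the linked services, then the input and output datasets, then a pipeline with a copy activity. In Data Factory v1 JSON, the activity fragment might look like:

{
  "name": "CopySqlToAdls",
  "type": "Copy",
  "inputs": [ { "name": "SqlInputDataset" } ],
  "outputs": [ { "name": "AdlsOutputDataset" } ],
  "typeProperties": {
    "source": {
      "type": "SqlSource",
      "sqlReaderQuery": "SELECT * FROM dbo.SourceTable"
    },
    "sink": { "type": "AzureDataLakeStoreSink" }
  }
}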

NEW QUESTION 14
You have a Microsoft Azure SQL data warehouse. You have an Azure Data Lake Store that contains data from ORC, RC, Parquet, and delimited text files.
You need to load the data to the data warehouse in the least amount of time possible.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. Use Microsoft SQL Server Integration Services (SSIS) to enumerate from the Data Lake Store by using a for loop.
  • B. Use AzCopy to export the files from the Data Lake Store to Azure Blob storage.
  • C. For each file in the loop, export the data to Parallel Data Warehouse by using a Microsoft SQL Server Native Client destination.
  • D. Load the data by executing the CREATE TABLE AS SELECT statement.
  • E. Use bcp to import the files.
  • F. In the data warehouse, configure external tables and external file formats that correspond to the Data Lake Store.

Answer: DF

Explanation:
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store
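
For context (hypothetical object names, consistent with the referenced PolyBase walkthrough, and assuming a database-scoped credential named ADLSCredential already exists): the load is defined with an external data source, an external file format, an external table, and finally CREATE TABLE AS SELECT. A condensed T-SQL sketch for the delimited text files:

CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'adl://mydatalake.azuredatalakestore.net',
    CREDENTIAL = ADLSCredential
);

CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

CREATE EXTERNAL TABLE dbo.ext_Events
(
    EventId INT,
    EventName NVARCHAR(100)
)
WITH (
    LOCATION = '/events/',
    DATA_SOURCE = AzureDataLakeStore,
    FILE_FORMAT = TextFileFormat
);

-- CTAS performs the parallel load into the warehouse
CREATE TABLE dbo.Events
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT * FROM dbo.ext_Events;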

NEW QUESTION 15
You have a Microsoft Azure Data Lake Analytics service.
You need to write a U-SQL query to extract from a CSV file all the users who live in Boston, and then to save the results in a new CSV file.
Which U-SQL script should you use?
70-776 dumps exhibit
70-776 dumps exhibit
70-776 dumps exhibit
70-776 dumps exhibit

  • A. Option A
  • B. Option B
  • C. Option C
  • D. Option D

Answer: A
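
The four option scripts are in the missing exhibits; as a hedged reconstruction of the pattern the correct option follows (hypothetical paths and schema):

@users =
    EXTRACT UserId int,
            Name string,
            City string
    FROM "/input/Users.csv"
    USING Extractors.Csv();

@boston =
    SELECT UserId, Name, City
    FROM @users
    WHERE City == "Boston";

OUTPUT @boston
TO "/output/BostonUsers.csv"
USING Outputters.Csv();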

NEW QUESTION 16
DRAG DROP
You plan to create an alert for a Microsoft Azure Data Factory pipeline.
You need to configure the alert to trigger when the total number of failed runs exceeds five within a three-hour period.
How should you configure the window size and the threshold in the JSON file? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-monitor-manage-pipelines?view=powerbiapi-1.1.10
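
A hedged sketch of the relevant condition fragment of a classic alert-rule definition (field names follow the alert schema of that era and are reconstructed from memory; only windowSize and threshold matter to this question, and PT3H is the ISO 8601 notation for three hours):

"condition": {
  "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
  "dataSource": {
    "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
    "metricName": "FailedRuns"
  },
  "operator": "GreaterThan",
  "threshold": 5,
  "windowSize": "PT3H"
}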

NEW QUESTION 17
You have a Microsoft Azure SQL data warehouse that contains information about community events. An Azure Data Factory job writes an updated CSV file in Azure Blob storage to Community/{date}/events.csv daily.
You plan to consume a Twitter feed by using Azure Stream Analytics and to correlate the feed to the community events.
You plan to use Stream Analytics to retrieve the latest community events data and to correlate the data to the Twitter feed data.
You need to ensure that when updates to the community events data are written to the CSV files, the Stream Analytics job can access the latest community events data.
What should you configure?

  • A. an output that uses a blob storage sink and has a path pattern of Community/{date}
  • B. an output that uses an event hub sink and the CSV event serialization format
  • C. an input that uses a reference data source and has a path pattern of Community/{date}/events.csv
  • D. an input that uses a reference data source and has a path pattern of Community/{date}

Answer: C
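
Background sketch (hypothetical names and placeholder credentials): a Stream Analytics reference-data input over blob storage uses a path pattern that points at the exact file, so each day's refreshed events.csv is picked up automatically:

{
  "name": "CommunityEvents",
  "properties": {
    "type": "Reference",
    "datasource": {
      "type": "Microsoft.Storage/Blob",
      "properties": {
        "storageAccounts": [ { "accountName": "<account>", "accountKey": "<key>" } ],
        "container": "<container>",
        "pathPattern": "Community/{date}/events.csv",
        "dateFormat": "yyyy/MM/dd"
      }
    },
    "serialization": {
      "type": "Csv",
      "properties": { "fieldDelimiter": ",", "encoding": "UTF8" }
    }
  }
}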

P.S. Easily pass the 70-776 exam with 91 Q&As using the Surepassexam Dumps & PDF version. Welcome to download the newest Surepassexam 70-776 dumps: https://www.surepassexam.com/70-776-exam-dumps.html (91 New Questions)