Spark Fundamentals I

Learn the fundamentals of Spark. Gain hands-on experience through online labs using Hadoop, Mesos, and more.

Build your knowledge of this important tool and take a big step forward in your data science career.

Spark Fundamentals I Highlights

Duration

  • 2 weeks, 2-3 hours/week

Fee

US$ 99 - US$ 199

Apache Spark is a unified analytics engine used in big data analysis and machine learning. It is used to discover trends and real-time insights in many industries, including financial services, healthcare, manufacturing, and retail. This makes it an important skill for individuals keen to pursue a career in data science.

During this course, you will explore the fundamentals of Spark and become familiar with various core Spark tools. You will discover why and when Spark is used. You will explore the components of the Spark unified stack. You will learn the fundamentals of Spark's principal data abstraction, the Resilient Distributed Dataset. You will learn how to download and install Spark standalone. Plus, you will be introduced to Scala and Python.

Apache Spark is a popular general-purpose engine that is ideal for working with big data. If you are keen to build your experience through hands-on lab sessions, then this Spark Fundamentals course is an ideal step to take.

This course comprises five purposely designed modules that take you on a carefully defined learning journey.

It is a self-paced course with no fixed schedule for completing modules or submitting assignments. If you work 2-3 hours per week, you should complete the course within 2-3 weeks. However, you can work at your own pace as long as the course is completed before the deadline.

The materials for each module are accessible from the start of the course and will remain available for the duration of your enrollment. Methods of learning and assessment will include reading material, hands-on labs, and online exam questions.

As part of our mentoring service, you will have access to valuable guidance and support throughout the course. We provide a dedicated discussion space where you can ask questions, chat with your peers, and resolve issues. Depending on the payment plan you have chosen, you may also have access to live classes and webinars, which are an excellent opportunity to discuss problems with your mentor and ask questions. Mentoring services vary by package.

Once you have successfully completed the course, you will earn your IBM Certificate.

After completing this course, you will be able to:

  • Perform fast iterative algorithms.
  • Carry out interactive data mining.
  • Perform in-memory cluster computing.
  • Develop applications using the Java, Python, R, or Scala APIs.
  • Combine SQL, streaming, and complex analytics in the same application.
  • Run Spark applications on top of Hadoop, Mesos, standalone, or in the cloud.
  • Work with HDFS, Cassandra, HBase, or S3.

This course is intended for:

  • Individuals who need to understand data and data insights for their job.
  • Individuals who aspire to become data scientists or data engineers.

You should have a basic understanding of:

  • Apache Hadoop and big data.
  • The Linux operating system.
  • Scala, Python, R, or Java programming languages.


Course Outline

General Information
Learning objectives
Syllabus
Grading Scheme
Change Log
Copyright and Trademarks

Learning objectives
Resilient Distributed Dataset - Part 1
Resilient Distributed Dataset - Part 2
Resilient Distributed Dataset - Part 3
Lab - RDD and Dataframes
Python RDD Solution
Scala RDD Solution
DataFrames Solution
Graded Review Questions

Learning objectives
Spark Libraries - Part 1
Spark Libraries - Part 2
Spark Libraries - Part 3
Lab - Scala Libraries
Solution - Part 1
Solution - Part 2
Solution - Part 3
Solution - Part 4
Graded Review Questions

Learning objectives
Configuration, monitoring, and tuning - Part 1
Configuration, monitoring, and tuning - Part 2
Lab - Spark Fundamentals
Solution
Graded Review Questions

Course Certificate

Earn your certificate

Once you have completed this course, you will earn your certificate.


FAQs

How do I access the course?

Spark Fundamentals I is provided 100% online, so you will need internet access to use the course materials. When you enroll, you will be able to access the course materials immediately from the course link in your dashboard. Please note that this course has been designed to be taken with Spark Fundamentals II; we therefore recommend that you complete this course first and then enroll for Spark Fundamentals II when you are ready. This will ensure you have covered the required topics for this subject.

What are the prerequisites?

Spark Fundamentals I is intended to help you develop critical Spark skills, including working with distributed datasets and DataFrame operations. You will use Scala, Java, and Python to create and run a Spark application, create applications using Spark SQL, and configure and tune Spark. We therefore recommend that you have a basic understanding of Apache Hadoop and big data, basic knowledge of Linux, and basic skills in the Scala, Python, or Java programming languages.

Will I earn a certificate?

Yes, once you have successfully completed the course, you will earn a Certificate of Completion. Remember, you will also have gained valuable skills that you can refer to in interviews and in your LinkedIn profile!

Is the course completely online?

Yes. Spark Fundamentals I is entirely online; you do not need to attend any classes in person. This means, however, that you need internet access and the technology necessary to use the course materials.

The great thing is that you can take this course wherever you live. And though you may be studying on your own, you won't be learning alone: you will be encouraged to communicate and chat with your peers through the discussion space.

Why should I learn Apache Spark?

Apache Spark is a data processing framework that can process large datasets quickly and distribute processing tasks across many computers. The capacity to do both makes Apache Spark an important tool for processing big data and developing machine learning applications. It also has an easy-to-use API that reduces the burden on developers. It's therefore a great skill to have on your resumé and LinkedIn profile.