apache spark - Use jupyter nbconvert to run a pyspark job in standalone mode


I have used Jupyter Notebook to run PySpark in standalone mode before.

This requires setting:

PYSPARK_DRIVER_PYTHON_OPTS="notebook ..." PYSPARK_DRIVER_PYTHON=ipython

and then running:

${SPARK_HOME}/bin/pyspark --master spark://ip:7077
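
For reference, the complete launch sequence looks roughly like this (a minimal sketch; the notebook options and the master address shown here are placeholders, not my exact configuration):

export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --port=8888"  # example options only
${SPARK_HOME}/bin/pyspark --master spark://ip:7077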

We use Jupyter in our production solution, and I think jupyter nbconvert would make an excellent command for scheduled or daily batch jobs.
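
For example, I imagine a daily scheduled run would look something like this (a hypothetical crontab entry; the paths and notebook names are placeholders):

# run the notebook headlessly every day at 02:00
0 2 * * * cd /path/to/notebooks && jupyter nbconvert --to notebook --execute daily_job.ipynb --output daily_job.out.ipynb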

Currently, I still cannot find a way to use jupyter nbconvert to run PySpark in standalone mode. I would appreciate it if you could kindly share your experience with this.
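
To make the intent concrete, this is roughly what I am after (a sketch only; job.ipynb and the master URL are placeholders, and I have not gotten this to work):

# execute the notebook in batch instead of serving it interactively
jupyter nbconvert --to notebook --execute job.ipynb --output job.out.ipynb
# since ${SPARK_HOME}/bin/pyspark is not the entry point here, I assume the
# notebook itself would have to create its own SparkContext against
# spark://ip:7077, but I have not found a working recipe for this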

Many thanks!

