apache spark - Use jupyter nbconvert to run a pyspark job in standalone mode
I have used Jupyter Notebook to run PySpark in standalone mode. It requires setting:

    PYSPARK_DRIVER_PYTHON_OPTS="notebook ..."
    PYSPARK_DRIVER_PYTHON=ipython

and then running:

    ${SPARK_HOME}/bin/pyspark --master spark://ip:7077
We use Jupyter in our production solution, and I think jupyter nbconvert would be an excellent command for scheduled or daily batch jobs.

However, I still cannot find a way to use jupyter nbconvert to run a PySpark job in standalone mode. I hope someone can kindly share their experience with this.

Many thanks!
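For context, here is a minimal sketch of the kind of invocation being asked about. It is an untested assumption, not a confirmed solution: it reuses the same driver-override trick from the notebook setup above, but points the driver at jupyter with nbconvert options instead of the notebook server. The notebook filename job.ipynb is hypothetical.

    # Assumption: override the PySpark driver to invoke nbconvert instead of
    # the interactive notebook server. "job.ipynb" is a hypothetical path.
    export PYSPARK_DRIVER_PYTHON=jupyter
    export PYSPARK_DRIVER_PYTHON_OPTS="nbconvert --to notebook --execute job.ipynb"

    # Launch against the standalone master, as in the interactive setup above.
    ${SPARK_HOME}/bin/pyspark --master spark://ip:7077

Whether nbconvert's executed kernel actually inherits the Spark context created by the pyspark launcher is exactly the open question here.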