Job cancelled because SparkContext was shut down #15

@dalloliogm

Description

Thank you very much for developing this package, it looks very interesting.

I can reproduce the examples in the documentation; however, when I try the function on real data I get the following Spark error:

Error: org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down

The table I am trying to plot is quite big, but even after trying `head %>% compute` or `head %>% coalesce`, the error remains.
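For reference, the pattern I am using looks roughly like this (a minimal sketch, assuming a sparklyr connection `sc` and a table named `my_table` — the real connection, table, and row count differ):

```r
library(sparklyr)
library(dplyr)

# Hypothetical connection; the real job runs against a cluster
sc <- spark_connect(master = "local")

tbl_big <- tbl(sc, "my_table")

# Materialise only the first rows before plotting,
# but the SparkContext-shutdown error still occurs:
tbl_small <- tbl_big %>%
  head(1000) %>%
  compute("tbl_small")
```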

Do you have any suggestion on how to debug this, or which parameters could be adjusted?

Thanks
