PySpark dropna in one column
You can use Column.isNull / Column.isNotNull:
df.where(col("dt_mvmt").isNull())
df.where(col("dt_mvmt").isNotNull())
If you simply want to drop rows with NULL values in that column, you can use na.drop with the subset argument:
df.na.drop(subset=["dt_mvmt"])
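
Continuing with the same toy df from the sketch above (still an illustrative assumption), na.drop(subset=["dt_mvmt"]) keeps exactly the rows that the isNotNull filter keeps, and DataFrame.dropna is an alias for na.drop:

df.na.drop(subset=["dt_mvmt"]).show()  # same two rows as the isNotNull filter
df.dropna(subset=["dt_mvmt"]).show()   # dropna is equivalent to na.drop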