Initial Impressions:
Stable Diffusion is a latent diffusion model developed by Stability AI, the CompVis Group at LMU Munich, and Runway.

Like Disco Diffusion, the Stable Diffusion model is publicly available and can be run locally using open-source repositories. Because the model is openly available and runs on most consumer hardware, the community has been able to greatly expand its capabilities.

These capabilities include:

Text-to-image generation.

Image-to-image generation.

Inpainting.

Outpainting.

Upscaling.

Video generation. (*as yet unexplored, personally)
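The practical difference between text-to-image and image-to-image generation is where the denoising process starts: txt2img begins from pure noise, while img2img partially noises the input image and denoises from there. A minimal sketch of that scheduling arithmetic (the `strength` semantics mirror those used by diffusers and the AUTOMATIC1111 web-ui; the helper name is my own):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps actually run in img2img mode.

    strength=1.0 discards the init image entirely (a full txt2img-style
    denoise); strength=0.0 returns the init image essentially untouched.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)

# With 50 scheduler steps and a common default strength of 0.75,
# only the last 37 steps are run, starting from a partially noised image.
print(img2img_steps(50, 0.75))  # 37
print(img2img_steps(50, 1.0))   # 50 (equivalent to txt2img)
```

This is why low-strength img2img stays close to the source image: most of the noise schedule is simply skipped.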


Stylistic/Behavioural Differences:

Technical limitations:

As the model itself was trained on 512x512 images (a 64x64 latent space after the VAE's 8x downsampling), Stable Diffusion performs best at or near this resolution. Extreme aspect ratios (beyond roughly 2:1) result in stretched images or fisheye-like effects.
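The 512x512 figure refers to pixel space; internally the VAE compresses each image by a factor of 8 per side into a 4-channel latent, and it is this latent that the U-Net denoises. A quick sketch of that relationship (the factor-of-8, 4-channel layout is standard across SD 1.x and 2.x):

```python
def latent_shape(width: int, height: int,
                 downsample: int = 8, channels: int = 4) -> tuple:
    """Shape (C, H, W) of the latent tensor the U-Net actually denoises."""
    if width % downsample or height % downsample:
        raise ValueError("dimensions should be multiples of the VAE factor")
    return (channels, height // downsample, width // downsample)

print(latent_shape(512, 512))  # (4, 64, 64)
print(latent_shape(768, 768))  # (4, 96, 96) -- native size of the 768 variant
```

This also explains why requested dimensions must be multiples of the VAE factor (in practice, multiples of 64 for U-Net compatibility).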

Recently, Stability AI released Stable Diffusion 2.1, a newer model trained on a filtered version of the LAION-5B dataset. A variant of this version was also trained at a resolution of 768x768.

Thanks to community efforts, upscaling via a variety of methods is available within the AUTOMATIC1111 web-ui. Upscaled outputs commonly reach 2048x2048 and beyond.
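Large upscales are typically processed tile-by-tile, so a fixed-size upscaling model (e.g. an ESRGAN-family network) can cover an arbitrarily large canvas; tiles overlap to hide seams. A rough sketch of the tile-count arithmetic (the tile and overlap sizes here are illustrative defaults, not the web-ui's exact values):

```python
import math

def tiles_per_axis(size: int, tile: int = 512, overlap: int = 64) -> int:
    """How many overlapping tiles are needed to cover one image axis."""
    if size <= tile:
        return 1
    stride = tile - overlap  # each new tile advances by tile minus overlap
    return math.ceil((size - overlap) / stride)

# A 2048x2048 upscale covered by 512px tiles with 64px overlap:
n = tiles_per_axis(2048)
print(n * n)  # 25 tiles in total (5 x 5)
```

Tiling keeps VRAM use flat regardless of output size, which is what makes 2048x2048 outputs feasible on consumer GPUs.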

Copyright & Legal:

Use Cases: