> otherwise, just use a dynamic language, you can do everything, even shoot yourself in the foot!
Type classes allow huge flexibility while maintaining type safety, to a much greater degree than fundeps allow.
> it's about whether the result makes sense
Which they do. Perhaps you have some examples of when type families confused you or led you to make an error?
> Type families as provided in Haskell are incompatible with univalence.
TFs aren't dependent types. However, they are on the right track. Fundeps are farther away from the right idea. Could you explain to me what's wrong with your example? I'm not up to date on HoTT, but it seems like there's nothing in principle wrong with pattern matching on elements of `*`. That seems like an important feature of type-level functions.
>The insistence on global unique instances?
Why is this a problem? It makes sense from a theoretical perspective (we don't associate multiple ordering properties with the things we call "the integers"), and it's very easy to use newtype wrappers to create new instances if needed.
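To make the newtype approach concrete, here is a minimal sketch (the `Reversed` name is mine): a wrapper carrying its own `Ord` instance, while the globally unique `Ord Int` instance is left untouched.

```haskell
import Data.List (sort)

-- A newtype carrying its own Ord instance, leaving the globally
-- unique 'Ord Int' instance alone.
newtype Reversed = Reversed Int
  deriving (Eq, Show)

instance Ord Reversed where
  compare (Reversed x) (Reversed y) = compare y x  -- reversed order

-- Sorting through the wrapper yields a descending sort of Ints.
sortDescending :: [Int] -> [Int]
sortDescending = map (\(Reversed x) -> x) . sort . map Reversed
```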
> What does this even mean?
ML modules are flexible, but backwards from a theoretical perspective. Parametricity is something that should be embedded in the type system, not the module system.
> See here
Interesting example. However, I doubt that the syntactic cost of using such a system is less than the syntactic cost of enforcing global instance uniqueness and using newtype wrappers.
> TFs aren't dependent types. However, they are on the right track.
Dependent types are a good idea. The way Haskell attempts to approximate them is not. Parametricity is too good to give up. With the minor exception of reference cells (`IORef`, `STRef`, etc.), if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.
You know what type families actually resemble? What C++ calls “traits”: ad-hoc specialized template classes containing type members.
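To make the complaint concrete, here is a minimal closed type family of my own invention that sends two isomorphic types to non-isomorphic results, which a parametric type constructor like `Maybe` can never do:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Int and Word are isomorphic (fixed-width, same cardinality on a
-- given platform), yet F maps them to Bool and (), which are not.
type family F a where
  F Int  = Bool
  F Word = ()

fInt :: F Int      -- reduces to Bool
fInt = True

fWord :: F Word    -- reduces to ()
fWord = ()
```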
> Fundeps are farther away from the right idea.
Functional dependencies are a consistent extension of type classes: unlike type families, they don't introduce a second source of ad-hoc polymorphism.
> Why is this a problem? It makes sense from a theoretical perspective (we don't associate multiple ordering properties with the things we call "the integers"),
What if I want to order them as Gray-coded numbers? In any case, the integers are far from the only type that can be given an order structure, and many types don't have a clear “bestest” order structure to be preferred over other possible ones.
> and it's very easy to use newtype wrappers to create new instances if needed.
Creating `newtype` wrappers is easy at the type level, but using them is super cumbersome at the term level.
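A sketch of the term-level cost being described, using the standard `Data.Ord.Down` wrapper; `Data.Coerce` removes the wrapping traversals at runtime, but the newtype still has to be named at each call site:

```haskell
import Data.Coerce (coerce)
import Data.List (sort)
import Data.Ord (Down (..))

-- By hand: wrap, sort, unwrap -- two extra traversals and
-- newtype noise at every use site.
descManual :: [Int] -> [Int]
descManual = map (\(Down x) -> x) . sort . map Down

-- 'coerce' erases the wrapping at runtime, but the newtype
-- must still be spelled out in a type annotation.
descCoerce :: [Int] -> [Int]
descCoerce xs = coerce (sort (coerce xs :: [Down Int]))
```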
> ML modules are flexible, but backwards from a theoretical perspective.
> Parametricity is something that should be embedded in the type system, not the module system.
It's type families, as done in Haskell, that violate parametricity! Standard ML has parametric polymorphism, uncompromised by questionable type system extensions.
> Interesting example. However, I doubt that the syntactic cost of using such a system is less than the syntactic cost of enforcing global instance uniqueness and using newtype wrappers.
I can't imagine it being more cumbersome than wrapping lots of terms in newtype wrappers just to satisfy the type class instance resolution system.
>Um, aren't functional dependencies an add-on to multiparameter type classes?
You're right, I meant "type families".
> I defined two type instances that violate the principle of not doing evil:
We're not doing abstract category theory; we're writing computer programs (well, I am). Have you ever run into a problem with type families in that capacity?
>if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.
Agreed, but there's a difference between type functions and type constructors. TFs are (a limited form of) type functions. Value-level constructors admit lots of nice properties that value-level functions do not, and I see no reason to be uncomfortable with this being reflected at the type level.
> What if I want to order them as Gray-coded numbers?
Use a newtype wrapper. Even if a language allowed ad-hoc instances, I would consider it messy practice to apply some weird non-intuitive ordering like this without specifically making a new type for it.
> Creating `newtype` wrappers is easy at the type level, but using them is super cumbersome at the term level.
And using ML-style modules is easy at the term level, but cumbersome at the type level.
It's a tradeoff, and I suspect that newtypes are usually the cleaner/easier solution.
> ML modules are plain System F-omega
I hadn't seen the 1ML project. That's pretty cool.
> It's type families, as done in Haskell, that violate parametricity!
How so? I really don't understand your argument here, if you just take TFs to be a limited form of type function.
> We're not doing abstract category theory; we're writing computer programs (well, I am). Have you ever run into a problem with type families in that capacity?
I like being able to reason about my programs. For that to be a smooth process, the language has to be mathematically civilized.
> Agreed, but there's a difference between type functions and type constructors. TFs are (a limited form of) type functions.
By “type families”, I meant both data families and type families. Case-analyzing types is the problem; see below.
> And using ML-style modules is easy at the term level, but cumbersome at the type level.
Actually, ML-style modules are more convenient at the type level too! If I want a type constructor parameterized by 15 type arguments, rather than writing a normal type constructor in the core language, I make an ML-style functor parameterized by a structure containing 15 abstract type members.
> How so? I really don't understand your argument here, if you just take TFs to be a limited form of type function.
“In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.”
> I make an ML-style functor parameterized by a structure containing 15 abstract type members.
You can do this in Haskell with DataKinds (you just pass around a type of the correct kind which contains all the parameters). Admittedly, it is quite clunky at the moment. I did this to pass around CPU configuration objects for hardware synthesis à la Clash, as CPU designs are often parametrized over quite a few `Nat`s.
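A rough sketch of that pattern (the `Config` kind, its fields, and `busWidth` are invented for illustration, not taken from Clash): a single promoted type bundles the parameters instead of threading a long row of `Nat` arguments through every signature.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, ScopedTypeVariables, KindSignatures #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- One promoted type bundles every parameter.
data Config = Config Nat Nat   -- width, depth (only ever used promoted)

-- Project a field back out at the type level.
type family Width (c :: Config) :: Nat where
  Width ('Config w d) = w

-- A component parameterized by the whole configuration at once:
busWidth :: forall c. KnownNat (Width c) => Proxy c -> Integer
busWidth _ = natVal (Proxy :: Proxy (Width c))

type MyCpu = 'Config 32 8

demo :: Integer
demo = busWidth (Proxy :: Proxy MyCpu)
```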
> parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions
Whenever one introduces a typeclass constraint to a function, one can only assume that the function exhibits uniform behavior up to the differences introduced by different instances of the typeclass. There is no particular reason to assume that (+) has the same behavior for Int and Word, except insofar as we have some traditional understanding of how addition should work and which laws it should respect. The same is true for type families. It is not a problem that they introduce non-uniform behavior; we can only ask that they respect some specified rules with respect to their argument and result types.
Case-analyzing types in type families is no worse than writing a typeclass instance for a concrete type. Would you say that the fact that `instance Ord Word` and `instance Ord Int` are non-isomorphic is a problem? After all, the types themselves are isomorphic!
> Whenever one introduces a typeclass constraint to a function, one can only assume that the function exhibits uniform behavior up to the differences introduced by different instances of the typeclass.
Of course.
> Would you say that the fact that "instance Ord Word" and "instance Ord Int" are non-isomorphic is a problem? After all, the types themselves are isomorphic!
It's already bad enough, but at least the existence of non-uniform behavior is evident in a type signature containing type class constraints. OTOH, type families are sneaky, because they don't look any different from normal type constructors or synonyms.
>OTOH, type families are sneaky, because they don't look any different from normal type constructors or synonyms.
That is fair.
I think we're on the same page at this point. You have made me realize that ML-style modules are useful in ways I did not realize before, so thanks for that.
Question: How would you feel if the tradition was to do something like
```haskell
insert :: Ord a f => a -> Set f a -> Set f a
```
That is, "f" is some type that indicates a particular ordering among "a"s. Then, "Set"s are parametrized over both "f" and "a", and one cannot accidentally mix up Sets that use a different Ord instance.
Seems a lot more cumbersome than the direct ML solution:
```sml
signature ORD =
sig
  type t
  val <= : t * t -> bool
end

functor RedBlackSet (E : ORD) :> SET =
struct
  type elem = E.t
  datatype set
    = Empty
    | Red of set * elem * set
    | Black of set * elem * set
  (* ... *)
end

structure Foo = RedBlackSet (Int)
structure Bar = RedBlackSet (Backwards (Int))
(* Foo.set and Bar.set are different abstract types! *)
```
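For comparison, the `Set f a` idea from the question can be sketched in Haskell with a phantom tag selecting the ordering (all names here are invented; the real `reflection` package does this far more thoroughly):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import qualified Data.List as List
import Data.Proxy (Proxy (..))

-- A tag type 'f' selects an ordering; it never appears in the data.
class OrdTag f where
  cmpTag :: Ord a => Proxy f -> a -> a -> Ordering

data Asc
data Desc

instance OrdTag Asc  where cmpTag _ = compare
instance OrdTag Desc where cmpTag _ = flip compare

-- The set remembers, in its type, which ordering built it, so sets
-- built with different orderings cannot be mixed up accidentally.
newtype Set f a = Set [a]   -- invariant: sorted by the 'f' ordering

empty :: Set f a
empty = Set []

insert :: forall f a. (OrdTag f, Ord a) => a -> Set f a -> Set f a
insert x (Set xs) = Set (List.insertBy (cmpTag (Proxy :: Proxy f)) x xs)

toList :: Set f a -> [a]
toList (Set xs) = xs
```

Trying to pass a `Set Asc Int` where a `Set Desc Int` is expected is a type error, which is the guarantee the question asks about.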
> Parametricity is too good to give up. With the minor exception of reference cells (`IORef`, `STRef`, etc.), if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.
You know that's not what parametricity means, right? Like, at all?
Here's a challenge.
`foo :: forall a. a -> a`
Now, by parametricity, that should have only one inhabitant (up to iso). Use your claimed break in parametricity from type families and provide me two distinct inhabitants.
I should have specified "modulo bottom", because I somehow didn't cotton on to the fact that I was talking to someone more interested in pedantry than actual discussion.
That said, constructing an inhabitant of false a _different_ way (when we can already write `someFalse = someFalse`) is not particularly interesting, and again doesn't speak to parametricity in any direct way.