NAME
DBIx::Class::ResultDDL - Sugar methods for declaring DBIx::Class::Result data definitions
VERSION
version 0.90_02
SYNOPSIS
package MyApp::Schema::Result::Artist;
use DBIx::Class::ResultDDL -V1;
table 'artist';
col id => integer, unsigned, auto_inc;
col name => varchar(25), null;
primary_key 'id';
idx artist_by_name => [ 'name' ];
has_many albums => { id => 'Album.artist_id' };
rel_many impersonators => { name => 'Artist.name' };
DESCRIPTION
This is Yet Another Sugar Module for building DBIC result classes. It provides a domain-specific-language that feels almost like writing DDL.
This module heavily pollutes your symbol table in the name of extreme convenience, so the -Vx option has the added feature of automatically removing those symbols at end-of-scope, as if you had said use namespace::clean.
This module has a versioned API, to help prevent name collisions. If you request the -Vx behavior, you can rely on that to remain the same across upgrades.
EXPORTED FEATURES
This module is based on Exporter::Extensible, allowing all the import notations that module provides. Every export beginning with a dash indicates custom behavior, rather than just a simple export of symbols.
-swp
"Strict, Warnings, Parent".
Enable use strict and use warnings unless those flags have been changed from the default via other means. In other words, you can still use Moo or use common::sense without this option overriding your choice.
Then, use parent "DBIx::Class::Core" unless the class already has an add_column method. If add_column exists, it is presumably because you already declared a parent class. Note that this check happens at BEGIN-time, so if you use Moo and extends 'SomeClass'; you need to wrap that extends call in a BEGIN block before the use DBIx::Class::ResultDDL -V1 line, as shown below.
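For example, a minimal sketch combining Moo with a custom parent class (the class names here are hypothetical):

package MyApp::Schema::Result::Example;
use Moo;
BEGIN { extends 'MyApp::CustomResultBase' }  # must run before ResultDDL's BEGIN-time check for add_column
use DBIx::Class::ResultDDL -V1;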
-autoclean
Remove all added symbols at the end of current scope.
-V1
Implies -swp, :V1, and -autoclean.
-V0
Implies -swp, :V0, and -autoclean.
-inflate_datetime
Inflate all date columns to DateTime objects, by adding the DBIC component DBIx::Class::InflateColumn::DateTime.
-inflate_json
Causes all columns declared with the json or jsonb sugar methods to also declare inflate_json.
EXPORTED COLLECTIONS
:V1
This tag selects the following symbols:
table view
col
null default auto_inc fk
integer unsigned tinyint smallint bigint decimal numeric money
float float4 float8 double real
char varchar nchar nvarchar binary varbinary blob text ntext uuid
date datetime timestamp enum bool boolean
json jsonb inflate_json array
primary_key idx create_index unique sqlt_add_index sqlt_add_constraint
rel_one rel_many has_one might_have has_many belongs_to many_to_many
ddl_cascade dbic_cascade
:V0
See DBIx::Class::ResultDDL::V0. The primary differences from V1 are the lack of array column support, the lack of index declaration support, and that sugar methods do not pass through leftover unknown arguments. New Postgres column types were also added in V1.
EXPORTED FUNCTIONS
table
table 'foo';
# becomes...
__PACKAGE__->table('foo');
col
col $name, @options;
# becomes...
__PACKAGE__->add_column($name, { is_nullable => 0, @merged_options });
Define a column. This calls add_column after sensibly merging all your options. It defaults the column to not-null for you, but you can override that by saying null in your options. You will probably use many of the methods below to build the options for the column (a combined example follows the list):
null
  is_nullable => 1
auto_inc
  is_auto_increment => 1, 'extra.auto_increment_type' => 'monotonic'
(The 'monotonic' bit is required to correctly deploy on SQLite. You can read the gory details but the short version is that SQLite gives you "fake" autoincrement by default, and you only get real ANSI-style autoincrement if you ask for it. SQL::Translator doesn't ask for the extra work by default, but if you're declaring columns by hand expecting it to be platform-neutral, then you probably want this. SQLite also requires data_type "integer", and for it to be the primary key.)
fk
  is_foreign_key => 1
default($value | @value)
  default_value => $value
  default_value => [ @value ] # if more than one param
integer, integer($size), integer[], integer $size,[]
  data_type => 'integer', size => $size || 11
  data_type => 'integer[]', size => $size || 11
unsigned
  extra => { unsigned => 1 }
MySQL specific flag which can be combined with integer.
tinyint
  data_type => 'tinyint', size => 4
smallint
  data_type => 'smallint', size => 6
bigint
  data_type => 'bigint', size => 22
numeric, numeric($p), numeric($p,$s), numeric[], numeric $p,$s,[]
  data_type => 'numeric'
  data_type => 'numeric', size => [ $p ]
  data_type => 'numeric', size => [ $p, $s ]
  data_type => 'numeric[]'
  data_type => 'numeric[]', size => [ $p, $s ]
decimal
Identical to numeric, but sets data_type => 'decimal'.
money, money[]
  data_type => 'money'
  data_type => 'money[]'
real, real[]
  data_type => 'real'
  data_type => 'real[]'
float4, float4[]
  data_type => 'float4'
  data_type => 'float4[]'
double, double[]
  data_type => 'double precision'
  data_type => 'double precision[]'
float8, float8[]
  data_type => 'float8'
  data_type => 'float8[]'
float, float($bits), float[], float $bits,[]
  data_type => 'float'
  data_type => 'float', size => $bits
  data_type => 'float[]'
  data_type => 'float[]', size => $bits
SQL Server and Postgres offer this, where $bits is the number of bits of precision of the mantissa. Array notation is supported for Postgres.
char, char($size), char[], char $size,[]
  data_type => 'char', size => $size // 1
  data_type => 'char[]', size => $size // 1
varchar, varchar($size), varchar(MAX), varchar[], varchar $size,[]
  data_type => 'varchar'
  data_type => 'varchar', size => $size   # "MAX" is a valid size for SQL Server
  data_type => 'varchar[]'
  data_type => 'varchar[]', size => $size
nchar, nchar($size)
SQL Server specific type for unicode char. Same API as char.
nvarchar
SQL Server specific type for unicode varying character string. Same API as varchar.
MAX
Constant for 'MAX', used by SQL Server for varchar(MAX).
binary, binary($size), binary[], binary $size,[]
  data_type => 'binary'
  data_type => 'binary', size => $size
  data_type => 'binary[]'
  data_type => 'binary[]', size => $size
varbinary, varbinary($size), varbinary[], varbinary $size,[]
  data_type => 'varbinary'
  data_type => 'varbinary', size => $size
  data_type => 'varbinary[]'
  data_type => 'varbinary[]', size => $size
bit, bit($size), bit[], bit $size,[]
  data_type => 'bit', size => $size // 1
  data_type => 'bit[]', size => $size // 1
Note that Postgres allows length-N bit strings, and arrays of length-N bit strings, but SQL Server uses this same type name to represent a single bit.
varbit, varbit($size), varbit[], varbit $size,[]
  data_type => 'varbit'
  data_type => 'varbit', size => $size
  data_type => 'varbit[]'
  data_type => 'varbit[]', size => $size
blob, blob($size)
  data_type => 'blob', size => $size if defined $size
Note: For MySQL, you need to change the type according to '$size'. A MySQL blob is 2^16 max length, and probably none of your binary data would be that small. Consider mediumblob or longblob, or consider overriding My::Schema::sqlt_deploy_hook to perform this conversion automatically according to which DBMS you are connected to. For SQL Server, newer versions deprecate blob in favor of VARCHAR(MAX). This is another detail you might take care of in sqlt_deploy_hook.
bytea, bytea[]
Postgres's blob type. (no size is allowed)
tinyblob
MySQL-specific type for small blobs
  data_type => 'tinyblob', size => 0xFF
mediumblob
MySQL-specific type for larger blobs
  data_type => 'mediumblob', size => 0xFFFFFF
longblob
MySQL-specific type for the longest supported blob type
  data_type => 'longblob', size => 0xFFFFFFFF
text, text($size), text[]
  data_type => 'text'
  data_type => 'text', size => $size
  data_type => 'text[]'
See MySQL notes in blob. For SQL Server, you might want ntext or varchar(MAX) instead. Postgres does not use a size, and allows arrays of this type.
tinytext
  data_type => 'tinytext', size => 0xFF
mediumtext
  data_type => 'mediumtext', size => 0xFFFFFF
longtext
  data_type => 'longtext', size => 0xFFFFFFFF
ntext, ntext($size)
SQL-Server specific type for unicode text. Note that newer versions prefer nvarchar(MAX).
  data_type => 'ntext', size => $size // 0x3FFFFFFF
enum( @values )
  data_type => 'enum', extra => { list => [ @values ] }
bool, boolean
  data_type => 'boolean'
Note that SQL Server has 'bit' instead, though in postgres 'bit' is used for bitstrings.
date, date($timezone)
  data_type => 'date'
  time_zone => $timezone if defined $timezone
datetime, datetime($timezone)
  data_type => 'datetime'
  time_zone => $timezone if defined $timezone
timestamp, timestamp($timezone)
  data_type => 'timestamp'
  time_zone => $timezone if defined $timezone
array($type)
Declares a postgres array type, with notation
  data_type => $type . '[]'
uuid, uuid[]
  data_type => 'uuid'
  data_type => 'uuid[]'
json, json[]
  data_type => 'json'
  data_type => 'json[]'
If the -inflate_json use-line option was given, this will additionally imply serializer_class => 'JSON'.
jsonb, jsonb[]
  data_type => 'jsonb'
  data_type => 'jsonb[]'
If the -inflate_json use-line option was given, this will additionally imply serializer_class => 'JSON'.
inflate_json
  serializer_class => 'JSON'
Also adds the component 'InflateColumn::Serializer' to the current package if it wasn't added already.
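Putting several of these helpers together, a column list might look like the following sketch (the table, column, and relation names here are hypothetical, and the particular choices are purely illustrative):

package MyApp::Schema::Result::Album;
use DBIx::Class::ResultDDL -V1;

table 'album';
col id          => integer, unsigned, auto_inc;
col artist_id   => integer, unsigned, fk;
col title       => varchar(255);
col price       => numeric(8,2), null;
col status      => enum('draft','published'), default('draft');
col released_on => date('UTC'), null;
col metadata    => jsonb;
primary_key 'id';
belongs_to artist => { artist_id => 'Artist.id' };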
primary_key
primary_key(@cols)
Shortcut for __PACKAGE__->set_primary_key(@cols)
unique
unique($name?, \@cols)
Shortcut for __PACKAGE__->add_unique_constraint($name?, \@cols)
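For example (the constraint and column names below are hypothetical):

unique artist_name_key => [ 'name', 'label_id' ];
# or let DBIC generate the constraint name:
unique [ 'name', 'label_id' ];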
belongs_to
belongs_to $rel_name, $peer_class, $condition, @attr_list;
belongs_to $rel_name, { colname => "$ResultClass.$colname" }, @attr_list;
# becomes...
__PACKAGE__->belongs_to($rel_name, $peer_class, $condition, { @attr_list });
Note that the normal DBIC belongs_to requires conditions to be of the form
{ "foreign.$their_col" => "self.$my_col" }
but all these sugar functions allow it to be written the other way around, and use a table name in place of "foreign.".
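For example, these two declarations are intended to be equivalent (class and column names are hypothetical):

belongs_to artist => 'MyApp::Schema::Result::Artist', { 'foreign.id' => 'self.artist_id' };
# ...can instead be written as:
belongs_to artist => { artist_id => 'Artist.id' };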
might_have
might_have $rel_name, $peer_class, $condition, @attr_list;
might_have $rel_name, { colname => "$ResultClass.$colname" }, @attr_list;
# becomes...
__PACKAGE__->might_have($rel_name, $peer_class, $condition, { @attr_list });
has_one
has_one $rel_name, $peer_class, $condition, @attr_list;
has_one $rel_name, { colname => "$ResultClass.$colname" }, @attr_list;
# becomes...
__PACKAGE__->has_one($rel_name, $peer_class, $condition, { @attr_list });
has_many
has_many $rel_name, $peer_class, $condition, @attr_list;
has_many $rel_name, { colname => "$ResultClass.$colname" }, @attr_list;
# becomes...
__PACKAGE__->has_many($rel_name, $peer_class, $condition, { @attr_list });
many_to_many
many_to_many $name => $rel_to_linktable, $rel_from_linktable;
# becomes...
__PACKAGE__->many_to_many(@_);
rel_one
Declares a single-record left-join relation without implying ownership. Note that the DBIC relations that do imply ownership, like might_have, cause an implied deletion of the related row if you delete a row from this table that references it, even if your schema did not have a cascading foreign key. This DBIC feature is controlled by the cascade_delete option, and using this sugar function to set up the relation defaults that feature to "off".
rel_one $rel_name, $peer_class, $condition, @attr_list;
rel_one $rel_name, { $mycol => "$ResultClass.$fcol", ... }, @attr_list;
# becomes...
__PACKAGE__->add_relationship(
$rel_name, $peer_class, { "foreign.$fcol" => "self.$mycol" },
{
join_type => 'LEFT',
accessor => 'single',
cascade_copy => 0,
cascade_delete => 0,
is_depends_on => $is_f_pk, # auto-detected, unless specified
($is_f_pk? fk_columns => { $mycol => 1 } : ()),
@attr_list
}
);
rel_many
rel_many $name => { $my_col => "$class.$col", ... }, @options;
Same as "rel_one", but generates a one-to-many relation with a multi-accessor.
ddl_cascade
ddl_cascade; # same as ddl_cascade("CASCADE");
ddl_cascade(1); # same as ddl_cascade("CASCADE");
ddl_cascade(0); # same as ddl_cascade("RESTRICT");
ddl_cascade($mode);
Helper method to generate @options for above. It generates
  on_update => $mode, on_delete => $mode
This does not affect client-side cascade, and is only used by Schema::Loader to generate DDL for the foreign keys when the table is deployed.
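For example, a hypothetical foreign key relation whose generated DDL cascades updates and deletes:

belongs_to album => { album_id => 'Album.id' }, ddl_cascade;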
dbic_cascade
dbic_cascade; # same as dbic_cascade(1)
dbic_cascade($enabled);
Helper method to generate @options for above. It generates
  cascade_copy => $enabled, cascade_delete => $enabled
This re-enables the dbic-side cascading that was disabled by default in the rel_ functions.
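For example, to turn DBIC-side cascading back on for a hypothetical relation:

rel_many tracks => { id => 'Track.album_id' }, dbic_cascade;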
view
view $view_name, $view_sql, %options;
Makes the current result source into a view. This is used instead of 'table'. Takes two options: 'is_virtual', to make this a virtual view, and 'depends', to list the tables this view depends on.
It is the equivalent of:
__PACKAGE__->table_class('DBIx::Class::ResultSource::View');
__PACKAGE__->table($view_name);
__PACKAGE__->result_source_instance->view_definition($view_sql);
__PACKAGE__->result_source_instance->deploy_depends_on($options{depends});
__PACKAGE__->result_source_instance->is_virtual($options{is_virtual});
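A sketch of declaring a view this way (the view name, SQL, and dependency list here are hypothetical; pass is_virtual => 1 if the view should not be deployed as a real database view):

package MyApp::Schema::Result::ArtistStats;
use DBIx::Class::ResultDDL -V1;

view 'artist_stats',
  'SELECT artist_id, COUNT(*) AS album_count FROM album GROUP BY artist_id',
  depends => [ 'album' ];

col artist_id   => integer;
col album_count => integer;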
INDEXES AND CONSTRAINTS
DBIx::Class doesn't actually track the indexes or constraints on a table. If you want these to be automatically deployed with your schema, you need an sqlt_deploy_hook function. This module can create one for you, but does not yet attempt to wrap one that you provide. (You can of course wrap the one generated by this module using a method modifier from Class::Method::Modifiers.) The method sqlt_deploy_hook is created in the current package the first time one of these functions is called. If it already exists and wasn't created by DBIx::Class::ResultDDL, it will throw an exception. The generated method does call maybe::next::method for you.
sqlt_add_index
This is a direct passthrough to the function "add_index" in SQL::Translator::Schema::Table, without any magic.
See notes above about the generated sqlt_deploy_hook.
sqlt_add_constraint
This is a direct passthrough to the function "add_constraint" in SQL::Translator::Schema::Table, without any magic.
See notes above about the generated sqlt_deploy_hook.
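For example, a hypothetical unique constraint passed straight through (this assumes the same named parameters that SQL::Translator::Schema::Table's add_constraint accepts):

sqlt_add_constraint
  name   => 'artist_name_unique',
  type   => 'UNIQUE',
  fields => [ 'name' ];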
create_index
create_index $index_name => \@fields, %options;
This is sugar for sqlt_add_index. It translates to
sqlt_add_index( name => $index_name, fields => \@fields, options => \%options, (type => ?) );
where the %options are the "options" in SQL::Translator::Schema::Index, except if one of the keys is type, then that key/value gets pulled out and used as "type" in SQL::Translator::Schema::Index.
idx
Alias for "create_index"; lines up nicely with 'col'.
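For example (index and column names are hypothetical):

create_index album_by_title => [ 'title' ];
idx album_by_year => [ 'year' ];    # same mechanism, via the alias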
MISSING FUNCTIONALITY
The methods above in most cases allow you to insert plain-old-DBIC notation where appropriate, instead of relying purely on sugar methods. If you are missing your favorite column flag or something, feel free to contribute a patch.
THANKS
Thanks to Clippard Instrument Laboratory Inc. and Ellis Partners in Management Solutions for supporting open source, including portions of this module.
AUTHOR
Michael Conrad <mconrad@intellitree.com>
COPYRIGHT AND LICENSE
This software is copyright (c) 2021 by Michael Conrad, IntelliTree Solutions llc.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.