
Dataset

Contained within this file are experimental interfaces for working with the Synapse Python Client. Unless otherwise noted, these interfaces are subject to change at any time. Use at your own risk.

API reference

synapseclient.models.Dataset dataclass

Bases: DatasetSynchronousProtocol, AccessControllable, ViewBase, ViewStoreMixin, DeleteMixin, ColumnMixin, GetMixin, QueryMixin, ViewUpdateMixin, ViewSnapshotMixin

A Dataset object represents the metadata of a Synapse Dataset. https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/model/table/Dataset.html

ATTRIBUTE DESCRIPTION
id

The unique immutable ID for this dataset. A new ID will be generated for new Datasets. Once issued, this ID is guaranteed to never change or be re-issued.

TYPE: Optional[str]

name

The name of this dataset. Must be 256 characters or less. Names may only contain: letters, numbers, spaces, underscores, hyphens, periods, plus signs, apostrophes, and parentheses.

TYPE: Optional[str]

description

The description of the dataset. Must be 1000 characters or less.

TYPE: Optional[str]

etag

Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle concurrent updates. Since the E-Tag changes every time an entity is updated it is used to detect when a client's current representation of an entity is out-of-date.

TYPE: Optional[str]

created_on

The date this dataset was created.

TYPE: Optional[str]

modified_on

The date this dataset was last modified. In YYYY-MM-DDThh:mm:ss.sssZ format.

TYPE: Optional[str]

created_by

The ID of the user that created this dataset.

TYPE: Optional[str]

modified_by

The ID of the user that last modified this dataset.

TYPE: Optional[str]

parent_id

The ID of the Entity that is the parent of this dataset.

TYPE: Optional[str]

columns

The columns of this dataset. This is an ordered dictionary where the key is the name of the column and the value is the Column object. When creating a new instance of a Dataset object you may pass any of the following types as the columns argument:

  • A list of Column objects
  • A dictionary where the key is the name of the column and the value is the Column object
  • An OrderedDict where the key is the name of the column and the value is the Column object

The order of the columns will be the order they are stored in Synapse. If you need to reorder the columns, the recommended approach is to use the .reorder_column() method. Additionally, you may add and delete columns using the .add_column() and .delete_column() methods on your dataset class instance.
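
For example, reordering a column might look like this (a minimal sketch, assuming the dataset already has a column named my_column):

from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

dataset = Dataset(id="syn1234").get()
# Move the column to the front of the column ordering
dataset.reorder_column(name="my_column", index=0)
dataset.store()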

You may modify the attributes of the Column object to change the column type, name, or other attributes. For example, suppose you'd like to change a column from an INTEGER to a DOUBLE. You can do so by changing the column type attribute of the Column object. The next time you store the dataset, the column will be updated in Synapse with the new type.

from synapseclient import Synapse
from synapseclient.models import Dataset, Column, ColumnType

syn = Synapse()
syn.login()

dataset = Dataset(id="syn1234").get()
dataset.columns["my_column"].column_type = ColumnType.DOUBLE
dataset.store()

Note that the keys in this dictionary should match the column names as they are in Synapse. However, know that the name attribute of the Column object is used for all interactions with the Synapse API. The OrderedDict key is purely for the usage of this interface. For example, if you wish to rename a column, you may do so by changing the name attribute of the Column object. The key in the OrderedDict does not need to be changed. The next time you store the dataset, the column will be updated in Synapse with the new name and the key in the OrderedDict will be updated.
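
A minimal sketch of a rename, assuming an existing column keyed as my_column:

from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

dataset = Dataset(id="syn1234").get()
# The rename happens through the Column's name attribute;
# the OrderedDict key can stay as-is
dataset.columns["my_column"].name = "my_renamed_column"
dataset.store()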

TYPE: Optional[Union[List[Column], OrderedDict[str, Column], Dict[str, Column]]]

version_number

The version number issued to this version of the object.

TYPE: Optional[int]

version_label

The version label for this dataset.

TYPE: Optional[str]

version_comment

The version comment for this dataset.

TYPE: Optional[str]

is_latest_version

If this is the latest version of the object.

TYPE: Optional[bool]

is_search_enabled

When creating or updating a dataset or view, specifies if full text search should be enabled. Note that enabling full text search might slow down the indexing of the dataset or view.
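
A minimal sketch of enabling full text search when creating a dataset (the parent project syn987 is a placeholder):

from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(parent_id="syn987", name="my-searchable-dataset", is_search_enabled=True)
my_dataset.store()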

TYPE: Optional[bool]

items

The flat list of file entity references that define this dataset. This is effectively a list of the rows that are in/will be in the dataset after it is stored. The only way to add or remove rows is to add or remove items from this list.

TYPE: Optional[List[EntityRef]]

size

The cumulative size, in bytes, of all items (files) in the dataset. This is only correct after the dataset has been stored or newly read from Synapse.

TYPE: Optional[int]

checksum

The checksum is computed over a sorted concatenation of the checksums of all items in the dataset. This is only correct after the dataset has been stored or newly read from Synapse.

TYPE: Optional[str]

count

The number of items/files in the dataset. This is only correct after the dataset has been stored or newly read from Synapse.

TYPE: Optional[int]

activity

The Activity model represents the main record of Provenance in Synapse. It is analogous to the Activity defined in the W3C Specification on Provenance.
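
A minimal sketch of attaching provenance, assuming the upstream entity syn111 exists and that the Activity and UsedEntity models are importable from synapseclient.models:

from synapseclient import Synapse
from synapseclient.models import Activity, Dataset, UsedEntity

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get(include_activity=True)
# Record that this dataset was derived from an upstream entity
my_dataset.activity = Activity(name="data-processing", used=[UsedEntity(target_id="syn111")])
my_dataset.store()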

TYPE: Optional[Activity]

annotations

Additional metadata associated with the dataset. The key is the name of your desired annotations. The value is an object containing a list of values (use empty list to represent no values for key) and the value type associated with all values in the list.
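
A minimal sketch of setting annotations (the keys shown are hypothetical):

from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.annotations = {"species": ["human"], "quality_score": [9.5]}
my_dataset.store()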

TYPE: Optional[Dict[str, Union[List[str], List[bool], List[float], List[int], List[date], List[datetime]]]]

include_default_columns

When creating a dataset or view, specifies if default columns should be included. Default columns are columns that are automatically added to the dataset or view. These columns are managed by Synapse and cannot be modified. If you attempt to create a column with the same name as a default column, you will receive a warning when you store the dataset.

include_default_columns is only used if this is the first time that the view is being stored. If you are updating an existing view this attribute will be ignored. If you want to add all default columns back to your view then you may use this code snippet to accomplish this:

import asyncio
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

async def main():
    view = await Dataset(id="syn1234").get_async()
    await view._append_default_columns()
    await view.store_async()

asyncio.run(main())

The column you are overriding will not behave the same as a default column. For example, suppose you create a column called id on a Dataset. When using a default column, the id stores the Synapse ID of each of the entities included in the scope of the view. If you override the id column with a new column, the id column will no longer store the Synapse ID of the entities in the view. Instead, it will store the values you provide when you store the dataset. It will be stored as an annotation on the entity for the row you are modifying.
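
For illustration, a sketch of opting out of default columns while supplying a custom id column (the parent project syn987 is a placeholder):

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Dataset

syn = Synapse()
syn.login()

# With include_default_columns=False the Synapse-managed columns are not added,
# and the custom id column stores its values as annotations on each row's entity
my_dataset = Dataset(
    parent_id="syn987",
    name="my-new-dataset",
    columns=[Column(name="id", column_type=ColumnType.STRING)],
    include_default_columns=False,
)
my_dataset.store()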

TYPE: Optional[bool]

Create a new dataset from a list of EntityRefs.

Dataset items consist of references to Synapse Files using an Entity Reference. If you are adding items to a Dataset directly, you must provide them in the form of an EntityRef class instance.

from synapseclient import Synapse
from synapseclient.models import Dataset, EntityRef

syn = Synapse()
syn.login()

my_entity_refs = [EntityRef(id="syn1234"), EntityRef(id="syn1235"), EntityRef(id="syn1236")]
my_dataset = Dataset(parent_id="syn987", name="my-new-dataset", items=my_entity_refs)
my_dataset.store()

Add entities to an existing dataset.

Using add_item, you can add Synapse entities that are Files, Folders, or EntityRefs that point to a Synapse entity. If the entity is a Folder (or an EntityRef that points to a folder), all of the child Files within the Folder will be added to the Dataset recursively.

from synapseclient import Synapse
from synapseclient.models import Dataset, File, Folder, EntityRef

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# Add a file to the dataset
my_dataset.add_item(File(id="syn1235"))

# Add a folder to the dataset
# All child files are recursively added to the dataset
my_dataset.add_item(Folder(id="syn1236"))

# Add an entity reference to the dataset
my_dataset.add_item(EntityRef(id="syn1237", version=1))

my_dataset.store()

Remove entities from a dataset.


from synapseclient import Synapse
from synapseclient.models import Dataset, File, Folder, EntityRef

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# Remove a file from the dataset
my_dataset.remove_item(File(id="syn1235"))

# Remove a folder from the dataset
# All child files are recursively removed from the dataset
my_dataset.remove_item(Folder(id="syn1236"))

# Remove an entity reference from the dataset
my_dataset.remove_item(EntityRef(id="syn1237", version=1))

my_dataset.store()

Query data from a dataset.


from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
row = my_dataset.query(query="SELECT * FROM syn1234 WHERE id = 'syn1235'")
print(row)

Add a custom column to a dataset.


from synapseclient import Synapse
from synapseclient.models import Dataset, Column, ColumnType

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.add_column(Column(name="my_annotation", column_type=ColumnType.STRING))
my_dataset.store()

Update custom column values in a dataset.


from synapseclient import Synapse
from synapseclient.models import Dataset
import pandas as pd

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
# my_annotation must already exist in the dataset as a custom column
modified_data = pd.DataFrame(
    {"id": ["syn1234"], "my_annotation": ["good data"]}
)
my_dataset.update_rows(values=modified_data, primary_keys=["id"], dry_run=False)

Save a snapshot of a dataset.


from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.snapshot(comment="My first snapshot", label="My first snapshot")

Deleting a dataset


from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

Dataset(id="syn4567").delete()

Source code in synapseclient/models/dataset.py

@dataclass
@async_to_sync
class Dataset(
    DatasetSynchronousProtocol,
    AccessControllable,
    ViewBase,
    ViewStoreMixin,
    DeleteMixin,
    ColumnMixin,
    GetMixin,
    QueryMixin,
    ViewUpdateMixin,
    ViewSnapshotMixin,
):
    """A `Dataset` object represents the metadata of a Synapse Dataset.
    <https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/model/table/Dataset.html>

    Attributes:
        id: The unique immutable ID for this dataset. A new ID will be generated for new
            Datasets. Once issued, this ID is guaranteed to never change or be re-issued
        name: The name of this dataset. Must be 256 characters or less. Names may only
            contain: letters, numbers, spaces, underscores, hyphens, periods, plus
            signs, apostrophes, and parentheses
        description: The description of the dataset. Must be 1000 characters or less.
        etag: Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle
            concurrent updates. Since the E-Tag changes every time an entity is updated
            it is used to detect when a client's current representation of an entity is
            out-of-date.
        created_on: The date this dataset was created.
        modified_on: The date this dataset was last modified.
            In YYYY-MM-DDThh:mm:ss.sssZ format
        created_by: The ID of the user that created this dataset.
        modified_by: The ID of the user that last modified this dataset.
        parent_id: The ID of the Entity that is the parent of this dataset.
        columns: The columns of this dataset. This is an ordered dictionary where the key is the
            name of the column and the value is the Column object. When creating a new instance
            of a Dataset object you may pass any of the following types as the `columns` argument:

            - A list of Column objects
            - A dictionary where the key is the name of the column and the value is the Column object
            - An OrderedDict where the key is the name of the column and the value is the Column object

            The order of the columns will be the order they are stored in Synapse. If you need
            to reorder the columns, the recommended approach is to use the `.reorder_column()`
            method. Additionally, you may add and delete columns using the `.add_column()`
            and `.delete_column()` methods on your dataset class instance.

            You may modify the attributes of the Column object to change the column
            type, name, or other attributes. For example, suppose you'd like to change a
            column from an INTEGER to a DOUBLE. You can do so by changing the column type
            attribute of the Column object. The next time you store the dataset the column
            will be updated in Synapse with the new type.

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, Column, ColumnType

            syn = Synapse()
            syn.login()

            dataset = Dataset(id="syn1234").get()
            dataset.columns["my_column"].column_type = ColumnType.DOUBLE
            dataset.store()
            ```

            Note that the keys in this dictionary should match the column names as they are in
            Synapse. However, know that the name attribute of the Column object is used for
            all interactions with the Synapse API. The OrderedDict key is purely for the usage
            of this interface. For example, if you wish to rename a column you may do so by
            changing the name attribute of the Column object. The key in the OrderedDict does
            not need to be changed. The next time you store the dataset the column will be updated
            in Synapse with the new name and the key in the OrderedDict will be updated.
        version_number: The version number issued to this version of the object.
        version_label: The version label for this dataset.
        version_comment: The version comment for this dataset.
        is_latest_version: If this is the latest version of the object.
        is_search_enabled: When creating or updating a dataset or view, specifies if full
            text search should be enabled. Note that enabling full text search might
            slow down the indexing of the dataset or view.
        items: The flat list of file entity references that define this dataset. This
            is effectively a list of the rows that are in/will be in the dataset after
            it is stored. The only way to add or remove rows is to add or remove items
            from this list.
        size: The cumulative size, in bytes, of all items (files) in the dataset. This is
            only correct after the dataset has been stored or newly read from Synapse.
        checksum: The checksum is computed over a sorted concatenation of the checksums
            of all items in the dataset. This is only correct after the dataset has been
            stored or newly read from Synapse.
        count: The number of items/files in the dataset. This is only correct after the
            dataset has been stored or newly read from Synapse.
        activity: The Activity model represents the main record of Provenance in
            Synapse. It is analogous to the Activity defined in the
            [W3C Specification](https://www.w3.org/TR/prov-n/) on Provenance.
        annotations: Additional metadata associated with the dataset. The key is the name
            of your desired annotations. The value is an object containing a list of
            values (use empty list to represent no values for key) and the value type
            associated with all values in the list.
        include_default_columns: When creating a dataset or view, specifies if default
            columns should be included. Default columns are columns that are
            automatically added to the dataset or view. These columns are managed by
            Synapse and cannot be modified. If you attempt to create a column with the
            same name as a default column, you will receive a warning when you store the
            dataset.

            **`include_default_columns` is only used if this is the first time that the
            view is being stored.** If you are updating an existing view this attribute
            will be ignored. If you want to add all default columns back to your view
            then you may use this code snippet to accomplish this:

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset

            syn = Synapse()
            syn.login()

            async def main():
                view = await Dataset(id="syn1234").get_async()
                await view._append_default_columns()
                await view.store_async()

            asyncio.run(main())
            ```

            The column you are overriding will not behave the same as a default column.
            For example, suppose you create a column called `id` on a Dataset. When
            using a default column, the `id` stores the Synapse ID of each of the
            entities included in the scope of the view. If you override the `id` column
            with a new column, the `id` column will no longer store the Synapse ID of
            the entities in the view. Instead, it will store the values you provide when
            you store the dataset. It will be stored as an annotation on the entity for
            the row you are modifying.

    Example: Create a new dataset from a list of EntityRefs.
        Dataset items consist of references to Synapse Files using an Entity Reference.
        If you are adding items to a Dataset directly, you must provide them in the form of
        an `EntityRef` class instance.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, EntityRef

        syn = Synapse()
        syn.login()

        my_entity_refs = [EntityRef(id="syn1234"), EntityRef(id="syn1235"), EntityRef(id="syn1236")]
        my_dataset = Dataset(parent_id="syn987", name="my-new-dataset", items=my_entity_refs)
        my_dataset.store()
        ```

    Example: Add entities to an existing dataset.
        Using `add_item`, you can add Synapse entities that are Files, Folders, or EntityRefs that point to a Synapse entity.
        If the entity is a Folder (or an EntityRef that points to a folder), all of the child Files
        within the Folder will be added to the Dataset recursively.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, File, Folder, EntityRef

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()

        # Add a file to the dataset
        my_dataset.add_item(File(id="syn1235"))

        # Add a folder to the dataset
        # All child files are recursively added to the dataset
        my_dataset.add_item(Folder(id="syn1236"))

        # Add an entity reference to the dataset
        my_dataset.add_item(EntityRef(id="syn1237", version=1))

        my_dataset.store()
        ```

    Example: Remove entities from a dataset.
        &nbsp;


        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, File, Folder, EntityRef

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()

        # Remove a file from the dataset
        my_dataset.remove_item(File(id="syn1235"))

        # Remove a folder from the dataset
        # All child files are recursively removed from the dataset
        my_dataset.remove_item(Folder(id="syn1236"))

        # Remove an entity reference from the dataset
        my_dataset.remove_item(EntityRef(id="syn1237", version=1))

        my_dataset.store()
        ```

    Example: Query data from a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        row = my_dataset.query(query="SELECT * FROM syn1234 WHERE id = 'syn1235'")
        print(row)
        ```

    Example: Add a custom column to a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, Column, ColumnType

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.add_column(Column(name="my_annotation", column_type=ColumnType.STRING))
        my_dataset.store()
        ```

    Example: Update custom column values in a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset
        import pandas as pd

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        # my_annotation must already exist in the dataset as a custom column
        modified_data = pd.DataFrame(
            {"id": ["syn1234"], "my_annotation": ["good data"]}
        )
        my_dataset.update_rows(values=modified_data, primary_keys=["id"], dry_run=False)
        ```

    Example: Save a snapshot of a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.snapshot(comment="My first snapshot", label="My first snapshot")
        ```

    Example: Deleting a dataset
        &nbsp;
        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        Dataset(id="syn4567").delete()
        ```
    """

    id: Optional[str] = None
    """The unique immutable ID for this dataset. A new ID will be generated for new
    datasets. Once issued, this ID is guaranteed to never change or be re-issued"""

    name: Optional[str] = None
    """The name of this dataset. Must be 256 characters or less. Names may only
    contain: letters, numbers, spaces, underscores, hyphens, periods, plus signs,
    apostrophes, and parentheses"""

    description: Optional[str] = None
    """The description of this entity. Must be 1000 characters or less."""

    etag: Optional[str] = field(default=None, compare=False)
    """
    Synapse employs an Optimistic Concurrency Control (OCC) scheme to handle
    concurrent updates. Since the E-Tag changes every time an entity is updated it is
    used to detect when a client's current representation of an entity is out-of-date.
    """

    created_on: Optional[str] = field(default=None, compare=False)
    """The date this dataset was created."""

    modified_on: Optional[str] = field(default=None, compare=False)
    """The date this dataset was last modified. In YYYY-MM-DD-Thh:mm:ss.sssZ format"""

    created_by: Optional[str] = field(default=None, compare=False)
    """The ID of the user that created this dataset."""

    modified_by: Optional[str] = field(default=None, compare=False)
    """The ID of the user that last modified this dataset."""

    parent_id: Optional[str] = None
    """The ID of the Entity that is the parent of this dataset."""

    version_number: Optional[int] = field(default=None, compare=False)
    """The version number issued to this version on the object."""

    version_label: Optional[str] = None
    """The version label for this dataset."""

    version_comment: Optional[str] = None
    """The version comment for this dataset."""

    is_latest_version: Optional[bool] = field(default=None, compare=False)
    """If this is the latest version of the object."""

    is_search_enabled: Optional[bool] = None
    """When creating or updating a dataset or view specifies if full text search
    should be enabled. Note that enabling full text search might slow down the
    indexing of the dataset or view."""

    items: Optional[List[EntityRef]] = field(default_factory=list, compare=False)
    """The flat list of file entity references that define this dataset."""

    size: Optional[int] = field(default=None, compare=False)
    """The cumulative size, in bytes, of all items(files) in the dataset.

    This is only correct after the dataset has been stored or newly read from Synapse.
    """

    checksum: Optional[str] = field(default=None, compare=False)
    """The checksum is computed over a sorted concatenation of the checksums of all
    items in the dataset.

    This is only correct after the dataset has been stored or newly read from Synapse.
    """

    count: Optional[int] = field(default=None, compare=False)
    """The number of items/files in the dataset.

    This is only correct after the dataset has been stored or newly read from Synapse.
    """

    columns: Optional[
        Union[List[Column], OrderedDict[str, Column], Dict[str, Column]]
    ] = field(default_factory=OrderedDict, compare=False)
    """
    The columns of this dataset. This is an ordered dictionary where the key is the
    name of the column and the value is the Column object. When creating a new instance
    of a Dataset object you may pass any of the following types as the `columns` argument:

    - A list of Column objects
    - A dictionary where the key is the name of the column and the value is the Column object
    - An OrderedDict where the key is the name of the column and the value is the Column object

    The order of the columns will be the order they are stored in Synapse. If you need
    to reorder the columns, the recommended approach is to use the `.reorder_column()`
    method. Additionally, you may add and delete columns using the `.add_column()`
    and `.delete_column()` methods on your dataset class instance.

    You may modify the attributes of the Column object to change the column
    type, name, or other attributes. For example, suppose you'd like to change a
    column from an INTEGER to a DOUBLE. You can do so by changing the column type
    attribute of the Column object. The next time you store the dataset the column
    will be updated in Synapse with the new type.

    ```python
    from synapseclient import Synapse
    from synapseclient.models import Dataset, Column, ColumnType

    syn = Synapse()
    syn.login()

    dataset = Dataset(id="syn1234").get()
    dataset.columns["my_column"].column_type = ColumnType.DOUBLE
    dataset.store()
    ```

    Note that the keys in this dictionary should match the column names as they are in
    Synapse. However, know that the name attribute of the Column object is used for
    all interactions with the Synapse API. The OrderedDict key is purely for the usage
    of this interface. For example, if you wish to rename a column you may do so by
    changing the name attribute of the Column object. The key in the OrderedDict does
    not need to be changed. The next time you store the dataset the column will be updated
    in Synapse with the new name and the key in the OrderedDict will be updated.
    """

    _columns_to_delete: Optional[Dict[str, Column]] = field(default_factory=dict)
    """
    Columns to delete when the dataset is stored. The key in this dict is the ID of the
    column to delete. The value is the Column object that represents the column to
    delete.
    """

    activity: Optional[Activity] = field(default=None, compare=False)
    """The Activity model represents the main record of Provenance in Synapse.  It is
    analogous to the Activity defined in the
    [W3C Specification](https://www.w3.org/TR/prov-n/) on Provenance."""

    annotations: Optional[
        Dict[
            str,
            Union[
                List[str],
                List[bool],
                List[float],
                List[int],
                List[date],
                List[datetime],
            ],
        ]
    ] = field(default_factory=dict, compare=False)
    """Additional metadata associated with the dataset. The key is the name of your
    desired annotations. The value is an object containing a list of values
    (use empty list to represent no values for key) and the value type associated with
    all values in the list. To remove all annotations set this to an empty dict `{}`"""

    _last_persistent_instance: Optional["Dataset"] = field(
        default=None, repr=False, compare=False
    )
    """The last persistent instance of this object. This is used to determine if the
    object has been changed and needs to be updated in Synapse."""

    view_entity_type: ViewEntityType = ViewEntityType.DATASET
    """The API model string for the type of view. This is used to determine the default columns that are
    added to the table. Must be defined as a `ViewEntityType` enum.
    """

    view_type_mask: ViewTypeMask = ViewTypeMask.DATASET
    """The Bit Mask representing Dataset type.
    As defined in the Synapse REST API:
    <https://rest-docs.synapse.org/rest/GET/column/tableview/defaults.html>"""

    def __post_init__(self):
        self.columns = self._convert_columns_to_ordered_dict(columns=self.columns)

    @property
    def has_changed(self) -> bool:
        """Determines if the object has been changed and needs to be updated in Synapse."""
        return (
            not self._last_persistent_instance
            or self._last_persistent_instance != self
            or (not self._last_persistent_instance.items and self.items)
            or self._last_persistent_instance.items != self.items
        )

    def _set_last_persistent_instance(self) -> None:
        """Stash the last time this object interacted with Synapse. This is used to
        determine if the object has been changed and needs to be updated in Synapse."""
        del self._last_persistent_instance
        self._last_persistent_instance = dataclasses.replace(self)
        self._last_persistent_instance.activity = (
            dataclasses.replace(self.activity) if self.activity else None
        )
        self._last_persistent_instance.columns = (
            OrderedDict(
                (key, dataclasses.replace(column))
                for key, column in self.columns.items()
            )
            if self.columns
            else OrderedDict()
        )
        self._last_persistent_instance.annotations = (
            deepcopy(self.annotations) if self.annotations else {}
        )
        self._last_persistent_instance.items = (
            [dataclasses.replace(item) for item in self.items] if self.items else []
        )

    def fill_from_dict(self, entity, set_annotations: bool = True) -> "Self":
        """
        Converts the data coming from the Synapse API into this datamodel.

        Arguments:
            entity: The data coming from the Synapse API

        Returns:
            The Dataset object instance.
        """
        self.id = entity.get("id", None)
        self.name = entity.get("name", None)
        self.description = entity.get("description", None)
        self.parent_id = entity.get("parentId", None)
        self.etag = entity.get("etag", None)
        self.created_on = entity.get("createdOn", None)
        self.created_by = entity.get("createdBy", None)
        self.modified_on = entity.get("modifiedOn", None)
        self.modified_by = entity.get("modifiedBy", None)
        self.version_number = entity.get("versionNumber", None)
        self.version_label = entity.get("versionLabel", None)
        self.version_comment = entity.get("versionComment", None)
        self.is_latest_version = entity.get("isLatestVersion", None)
        self.is_search_enabled = entity.get("isSearchEnabled", False)
        self.size = entity.get("size", None)
        self.checksum = entity.get("checksum", None)
        self.count = entity.get("count", None)
        self.items = [
            EntityRef(id=item["entityId"], version=item["versionNumber"])
            for item in entity.get("items", [])
        ]

        if set_annotations:
            self.annotations = Annotations.from_dict(entity.get("annotations", {}))
        return self

    def to_synapse_request(self):
        """Converts the request to a request expected of the Synapse REST API."""

        entity = {
            "name": self.name,
            "description": self.description,
            "id": self.id,
            "etag": self.etag,
            "createdOn": self.created_on,
            "modifiedOn": self.modified_on,
            "createdBy": self.created_by,
            "modifiedBy": self.modified_by,
            "parentId": self.parent_id,
            "concreteType": concrete_types.DATASET_ENTITY,
            "versionNumber": self.version_number,
            "versionLabel": self.version_label,
            "versionComment": self.version_comment,
            "isLatestVersion": self.is_latest_version,
            "columnIds": (
                [
                    column.id
                    for column in self._last_persistent_instance.columns.values()
                ]
                if self._last_persistent_instance
                and self._last_persistent_instance.columns
                else []
            ),
            "isSearchEnabled": self.is_search_enabled,
            "items": (
                [item.to_synapse_request() for item in self.items] if self.items else []
            ),
            "size": self.size,
            "checksum": self.checksum,
            "count": self.count,
        }
        delete_none_keys(entity)
        result = {
            "entity": entity,
        }
        delete_none_keys(result)
        return result

    def _append_entity_ref(self, entity_ref: EntityRef) -> None:
        """Helper function to add an EntityRef to the items list of the dataset.
        Will not add duplicates.

        Arguments:
            entity_ref: The EntityRef to add to the items list of the dataset.
        """
        if entity_ref not in self.items:
            self.items.append(entity_ref)

    def add_item(
        self,
        item: Union[EntityRef, "File", "Folder"],
        *,
        synapse_client: Optional[Synapse] = None,
    ) -> None:
        """Adds an item in the form of an EntityRef to the dataset.
        For Folders, children are added recursively. Effect is not seen
        until the dataset is stored.

        Arguments:
            item: Entity to add to the dataset. Must be an EntityRef, File, or Folder.
            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Raises:
            ValueError: If the item is not an EntityRef, File, or Folder

        Example: Add a file to a dataset.
            &nbsp;

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, File

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.add_item(File(id="syn1235"))
            my_dataset.store()
            ```

        Example: Add a folder to a dataset.
            All child files are recursively added to the dataset.

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, Folder

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.add_item(Folder(id="syn1236"))
            my_dataset.store()
            ```

        Example: Add an entity reference to a dataset.
            &nbsp;

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, EntityRef

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.add_item(EntityRef(id="syn1237", version=1))
            my_dataset.store()
            ```
        """
        from synapseclient.models import File, Folder

        client = Synapse.get_client(synapse_client=synapse_client)

        if isinstance(item, EntityRef):
            self._append_entity_ref(entity_ref=item)
        elif isinstance(item, File):
            if not item.version_number:
                # Fetch the file from Synapse to resolve its current version number
                item = File(
                    id=item.id, version_number=item.version_number, download_file=False
                ).get()
            self._append_entity_ref(
                entity_ref=EntityRef(id=item.id, version=item.version_number)
            )
        elif isinstance(item, Folder):
            children = item._retrieve_children(follow_link=True)
            for child in children:
                if child["type"] == concrete_types.FILE_ENTITY:
                    self._append_entity_ref(
                        entity_ref=EntityRef(
                            id=child["id"], version=child["versionNumber"]
                        )
                    )
                else:
                    self.add_item(item=Folder(id=child["id"]), synapse_client=client)
        else:
            raise ValueError(
                f"item must be one of EntityRef, File, or Folder. {item} is a {type(item)}"
            )

    def _remove_entity_ref(self, entity_ref: EntityRef) -> None:
        """Helper function to remove an EntityRef from the items list of the dataset.

        Arguments:
            entity_ref: The EntityRef to remove from the items list of the dataset.
        """
        if entity_ref not in self.items:
            raise ValueError(f"Entity {entity_ref.id} not found in items list")
        self.items.remove(entity_ref)

    def remove_item(
        self,
        item: Union[EntityRef, "File", "Folder"],
        *,
        synapse_client: Optional[Synapse] = None,
    ) -> None:
        """
        Removes an item from the dataset. For Folders, all
        children of the folder are removed recursively.
        Effect is not seen until the dataset is stored.

        Arguments:
            item: The entity to remove from the dataset. Must be an EntityRef, File, or Folder.
            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Returns:
            None

        Raises:
            ValueError: If the item is not a valid type

        Example: Remove a file from a dataset.
            &nbsp;

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, File

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.remove_item(File(id="syn1235"))
            my_dataset.store()
            ```

        Example: Remove a folder from a dataset.
            All child files are recursively removed from the dataset.

            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, Folder

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.remove_item(Folder(id="syn1236"))
            my_dataset.store()
            ```

        Example: Remove an entity reference from a dataset.
            &nbsp;
            ```python
            from synapseclient import Synapse
            from synapseclient.models import Dataset, EntityRef

            syn = Synapse()
            syn.login()

            my_dataset = Dataset(id="syn1234").get()
            my_dataset.remove_item(EntityRef(id="syn1237", version=1))
            my_dataset.store()
            ```
        """
        from synapseclient.models import File, Folder

        client = Synapse.get_client(synapse_client=synapse_client)

        if isinstance(item, EntityRef):
            self._remove_entity_ref(item)
        elif isinstance(item, File):
            if not item.version_number:
                # Fetch the file from Synapse to resolve its current version number
                item = File(
                    id=item.id, version_number=item.version_number, download_file=False
                ).get()
            self._remove_entity_ref(EntityRef(id=item.id, version=item.version_number))
        elif isinstance(item, Folder):
            children = item._retrieve_children(follow_link=True)
            for child in children:
                if child["type"] == concrete_types.FILE_ENTITY:
                    self._remove_entity_ref(
                        EntityRef(id=child["id"], version=child["versionNumber"])
                    )
                else:
                    self.remove_item(item=Folder(id=child["id"]), synapse_client=client)
        else:
            raise ValueError(
                f"item must be one of str, EntityRef, File, or Folder, {item} is a {type(item)}"
            )

    async def store_async(
        self,
        dry_run: bool = False,
        *,
        job_timeout: int = 600,
        synapse_client: Optional[Synapse] = None,
    ) -> "Self":
        """Store information about a Dataset including the columns and annotations.
        Storing an update to the Dataset items will alter the rows present in the Dataset.
        Datasets have default columns that are managed by Synapse. The default behavior of
        this function is to include these default columns in the dataset when it is stored.
        This means that with the default behavior, any columns that you have added to your
        Dataset will be overwritten by the default columns if they have the same name. To
        avoid this behavior, set the `include_default_columns` attribute to `False`.

        Note the following behavior for the order of columns:

        - If a column is added via the `add_column` method it will be added at the
            index you specify, or at the end of the columns list.
        - If column(s) are added during the construction of your Dataset instance, i.e.
            `Dataset(columns=[Column(name="foo")])`, they will be added at the beginning
            of the columns list.
        - If you use the `store_rows` method and the `schema_storage_strategy` is set to
            `INFER_FROM_DATA` the columns will be added at the end of the columns list.

        Arguments:
            dry_run: If True, will not actually store the table but will log to
                the console what would have been stored.
            job_timeout: The maximum amount of time to wait for a job to complete.
                This is used when updating the table schema. If the timeout
                is reached a `SynapseTimeoutError` will be raised.
                The default is 600 seconds
            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Returns:
            The Dataset instance stored in synapse.

        Example: Create a new dataset from a list of EntityRefs by storing it.
            &nbsp;
            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset, EntityRef

            syn = Synapse()
            syn.login()

            async def main():
                my_entity_refs = [EntityRef(id="syn1234"), EntityRef(id="syn1235"), EntityRef(id="syn1236")]
                my_dataset = Dataset(parent_id="syn987", name="my-new-dataset", items=my_entity_refs)
                await my_dataset.store_async()

            asyncio.run(main())
            ```
        """
        return await super().store_async(
            dry_run=dry_run,
            job_timeout=job_timeout,
            synapse_client=synapse_client,
        )

    async def get_async(
        self,
        include_columns: bool = True,
        include_activity: bool = False,
        *,
        synapse_client: Optional[Synapse] = None,
    ) -> "Self":
        """Get the metadata about the Dataset from synapse.

        Arguments:
            include_columns: If True, will include fully filled column objects in the
                `.columns` attribute. Defaults to True.
            include_activity: If True the activity will be included in the Dataset
                if it exists. Defaults to False.

            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Returns:
            The Dataset instance stored in synapse.

        Example: Getting metadata about a Dataset using id
            Get a Dataset by ID and print out the columns and activity. `include_columns`
            defaults to True and `include_activity` defaults to False. When you need to
            update existing columns or activity these need to be set to True during the
            `get_async` call, then you'll make the changes, and finally call the
            `.store_async()` method.

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset

            syn = Synapse()
            syn.login()

            async def main():
                dataset = await Dataset(id="syn4567").get_async(include_activity=True)
                print(dataset)

                # Columns are retrieved by default
                print(dataset.columns)
                print(dataset.activity)

            asyncio.run(main())
            ```

        Example: Getting metadata about a Dataset using name and parent_id
            Get a Dataset by name/parent_id and print out the columns and activity.
            `include_columns` defaults to True and `include_activity` defaults to
            False. When you need to update existing columns or activity these need to
            be set to True during the `get_async` call, then you'll make the changes,
            and finally call the `.store_async()` method.

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset

            syn = Synapse()
            syn.login()

            async def main():
                dataset = await Dataset(
                    name="my_dataset",
                    parent_id="syn1234"
                ).get_async(
                    include_columns=True,
                    include_activity=True
                )
                print(dataset)
                print(dataset.columns)
                print(dataset.activity)

            asyncio.run(main())
            ```
        """
        return await super().get_async(
            include_columns=include_columns,
            include_activity=include_activity,
            synapse_client=synapse_client,
        )

    async def delete_async(self, *, synapse_client: Optional[Synapse] = None) -> None:
        """Delete the dataset from synapse. This is not version specific. If you'd like
        to delete a specific version of the dataset you must use the
        [synapseclient.api.delete_entity][] function directly.

        Arguments:
            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Returns:
            None

        Example: Deleting a dataset
            Deleting a dataset is only supported by the ID of the dataset.

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset

            syn = Synapse()
            syn.login()

            async def main():
                await Dataset(id="syn4567").delete_async()

            asyncio.run(main())
            ```
        """
        await super().delete_async(synapse_client=synapse_client)

    async def update_rows_async(
        self,
        values: DATA_FRAME_TYPE,
        primary_keys: List[str],
        dry_run: bool = False,
        *,
        rows_per_query: int = 50000,
        update_size_bytes: int = 1.9 * MB,
        insert_size_bytes: int = 900 * MB,
        job_timeout: int = 600,
        wait_for_eventually_consistent_view: bool = False,
        wait_for_eventually_consistent_view_timeout: int = 600,
        synapse_client: Optional[Synapse] = None,
        **kwargs,
    ) -> None:
        """Update the values of rows in the dataset. This method can only
        be used to update values in custom columns. Default columns cannot be updated, but
        may be used as primary keys.

        Limitations:

        - When updating many rows the requests to Synapse will be chunked into smaller
            requests. The limit is 2MB per request. This chunking will happen
            automatically and should not be a concern for most users. If you are
            having issues with the request being too large you may lower the
            number of rows you are trying to update.
        - The `primary_keys` argument must contain at least one column.
        - The `primary_keys` argument cannot contain columns that are a LIST type.
        - The `primary_keys` argument cannot contain columns that are a JSON type.
        - The values used as the `primary_keys` must be unique in the table. If there
            are multiple rows with the same values in the `primary_keys` the behavior
            is that an exception will be raised.
        - The columns used in `primary_keys` cannot contain updated values. Since
            the values in these columns are used to determine if a row exists, they
            cannot be updated in the same transaction.

        Arguments:
            values: Supports storing data from the following sources:

                - A string holding the path to a CSV file. The data will be read into a
                    [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe).
                    The code makes assumptions about the format of the columns in the
                    CSV as detailed in the [csv_to_pandas_df][synapseclient.models.mixins.table_components.csv_to_pandas_df]
                    function. You may pass in additional arguments to the `csv_to_pandas_df`
                    function by passing them in as keyword arguments to this function.
                - A dictionary where the key is the column name and the value is one or
                    more values. The values will be wrapped into a
                    [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe).
                    You may pass in additional arguments to the `pd.DataFrame` function
                    by passing them in as keyword arguments to this function. Read about
                    the available arguments in the
                    [Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html)
                    documentation.
                - A [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe)

            primary_keys: The columns to use to determine if a row already exists. If
                a row exists with the same values in the columns specified in this list
                the row will be updated. If a row does not exist nothing will be done.

            dry_run: If set to True the data will not be updated in Synapse. A message
                will be printed to the console with the number of rows that would have
                been updated and inserted. If you would like to see the data that would
                be updated and inserted you may set the `dry_run` argument to True and
                set the log level to DEBUG by setting the debug flag when creating
                your Synapse class instance like: `syn = Synapse(debug=True)`.

            rows_per_query: The number of rows that will be queried from Synapse per
                request. Since we need to query for the data that is being updated
                this will determine the number of rows that are queried at a time.
                The default is 50,000 rows.

            update_size_bytes: The maximum size of the request that will be sent to Synapse
                when updating rows of data. The default is 1.9MB.

            insert_size_bytes: The maximum size of the request that will be sent to Synapse
                when inserting rows of data. The default is 900MB.

            job_timeout: The maximum amount of time to wait for a job to complete.
                This is used when inserting, and updating rows of data. Each individual
                request to Synapse will be sent as an independent job. If the timeout
                is reached a `SynapseTimeoutError` will be raised.
                The default is 600 seconds

            wait_for_eventually_consistent_view: Only used if the table is a view. If
                set to True this will wait for the view to reflect any changes that
                you've made to the view. This is useful if you need to query the view
                after making changes to the data.

            wait_for_eventually_consistent_view_timeout: The maximum amount of time to
                wait for a view to be eventually consistent. The default is 600 seconds.

            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor

            **kwargs: Additional arguments that are passed to the `pd.DataFrame`
                function when the `values` argument is a path to a csv file.


        Example: Update custom column values in a dataset.
            &nbsp;

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset
            import pandas as pd

            syn = Synapse()
            syn.login()

            async def main():
                my_dataset = await Dataset(id="syn1234").get_async()

                # my_annotation must already exist in the dataset as a custom column
                modified_data = pd.DataFrame(
                    {"id": ["syn1234"], "my_annotation": ["good data"]}
                )
                await my_dataset.update_rows_async(values=modified_data, primary_keys=["id"], dry_run=False)

            asyncio.run(main())
            ```
        """
        await super().update_rows_async(
            values=values,
            primary_keys=primary_keys,
            dry_run=dry_run,
            rows_per_query=rows_per_query,
            update_size_bytes=update_size_bytes,
            insert_size_bytes=insert_size_bytes,
            job_timeout=job_timeout,
            wait_for_eventually_consistent_view=wait_for_eventually_consistent_view,
            wait_for_eventually_consistent_view_timeout=wait_for_eventually_consistent_view_timeout,
            synapse_client=synapse_client,
            **kwargs,
        )

    async def snapshot_async(
        self,
        *,
        comment: Optional[str] = None,
        label: Optional[str] = None,
        include_activity: bool = True,
        associate_activity_to_new_version: bool = True,
        synapse_client: Optional[Synapse] = None,
    ) -> "TableUpdateTransaction":
        """Creates a snapshot of the dataset. A snapshot is a saved, read-only version of the dataset
        at the time it was created. Dataset snapshots are created using the asynchronous job API.

        Arguments:
            comment: A unique comment to associate with the snapshot.
            label: A unique label to associate with the snapshot.
            include_activity: If True the activity will be included in snapshot if it
                exists. In order to include the activity, the activity must have already
                been stored in Synapse by using the `activity` attribute on the Dataset
                and calling the `store()` method on the Dataset instance. Adding an
                activity to a snapshot of a dataset is meant to capture the provenance of
                the data at the time of the snapshot. Defaults to True.
            associate_activity_to_new_version: If True the activity will be associated
                with the new version of the dataset. If False the activity will not be
                associated with the new version of the dataset. Defaults to True.
            synapse_client: If not passed in and caching was not disabled by
                `Synapse.allow_client_caching(False)` this will use the last created
                instance from the Synapse class constructor.

        Returns:
            A `TableUpdateTransaction` object which includes the version number of the snapshot.

        Example: Save a snapshot of a dataset.
            &nbsp;

            ```python
            import asyncio
            from synapseclient import Synapse
            from synapseclient.models import Dataset

            syn = Synapse()
            syn.login()

            async def main():
                my_dataset = await Dataset(id="syn1234").get_async()
                await my_dataset.snapshot_async(comment="My first snapshot", label="My first snapshot")

            asyncio.run(main())
            ```
        """
        return await super().snapshot_async(
            comment=comment,
            label=label,
            include_activity=include_activity,
            associate_activity_to_new_version=associate_activity_to_new_version,
            synapse_client=synapse_client,
        )

Functions

add_item

add_item(item: Union[EntityRef, File, Folder], *, synapse_client: Optional[Synapse] = None) -> None

Adds an item in the form of an EntityRef to the dataset. For Folders, children are added recursively. Effect is not seen until the dataset is stored.

PARAMETER DESCRIPTION
item

Entity to add to the dataset. Must be an EntityRef, File, or Folder.

TYPE: Union[EntityRef, File, Folder]

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RAISES DESCRIPTION
ValueError

If the item is not an EntityRef, File, or Folder

Add a file to a dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, File

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.add_item(File(id="syn1235"))
my_dataset.store()
```

Add a folder to a dataset.

All child files are recursively added to the dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, Folder

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.add_item(Folder(id="syn1236"))
my_dataset.store()
```

Add an entity reference to a dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, EntityRef

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.add_item(EntityRef(id="syn1237", version=1))
my_dataset.store()
```
Source code in synapseclient/models/dataset.py
def add_item(
    self,
    item: Union[EntityRef, "File", "Folder"],
    *,
    synapse_client: Optional[Synapse] = None,
) -> None:
    """Adds an item in the form of an EntityRef to the dataset.
    For Folders, children are added recursively. Effect is not seen
    until the dataset is stored.

    Arguments:
        item: Entity to add to the dataset. Must be an EntityRef, File, or Folder.
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Raises:
        ValueError: If the item is not an EntityRef, File, or Folder

    Example: Add a file to a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, File

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.add_item(File(id="syn1235"))
        my_dataset.store()
        ```

    Example: Add a folder to a dataset.
        All child files are recursively added to the dataset.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, Folder

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.add_item(Folder(id="syn1236"))
        my_dataset.store()
        ```

    Example: Add an entity reference to a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, EntityRef

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.add_item(EntityRef(id="syn1237", version=1))
        my_dataset.store()
        ```
    """
    from synapseclient.models import File, Folder

    client = Synapse.get_client(synapse_client=synapse_client)

    if isinstance(item, EntityRef):
        self._append_entity_ref(entity_ref=item)
    elif isinstance(item, File):
        if not item.version_number:
            item = File(
                id=item.id, version_number=item.version_number, download_file=False
            ).get()
        self._append_entity_ref(
            entity_ref=EntityRef(id=item.id, version=item.version_number)
        )
    elif isinstance(item, Folder):
        children = item._retrieve_children(follow_link=True)
        for child in children:
            if child["type"] == concrete_types.FILE_ENTITY:
                self._append_entity_ref(
                    entity_ref=EntityRef(
                        id=child["id"], version=child["versionNumber"]
                    )
                )
            else:
                self.add_item(item=Folder(id=child["id"]), synapse_client=client)
    else:
        raise ValueError(
            f"item must be one of EntityRef, File, or Folder. {item} is a {type(item)}"
        )

remove_item

remove_item(item: Union[EntityRef, File, Folder], *, synapse_client: Optional[Synapse] = None) -> None

Removes an item from the dataset. For Folders, all children of the folder are removed recursively. Effect is not seen until the dataset is stored.

PARAMETER DESCRIPTION
item

The Entity to remove from the dataset. Must be an EntityRef, File, or Folder.

TYPE: Union[EntityRef, File, Folder]

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
None

None

RAISES DESCRIPTION
ValueError

If the item is not a valid type

Remove a file from a dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, File

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.remove_item(File(id="syn1235"))
my_dataset.store()
```

Remove a folder from a dataset.

All child files are recursively removed from the dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, Folder

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.remove_item(Folder(id="syn1236"))
my_dataset.store()
```

Remove an entity reference from a dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, EntityRef

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.remove_item(EntityRef(id="syn1237", version=1))
my_dataset.store()
```

Source code in synapseclient/models/dataset.py
def remove_item(
    self,
    item: Union[EntityRef, "File", "Folder"],
    *,
    synapse_client: Optional[Synapse] = None,
) -> None:
    """
    Removes an item from the dataset. For Folders, all
    children of the folder are removed recursively.
    Effect is not seen until the dataset is stored.

    Arguments:
        item: The Entity to remove from the dataset. Must be an EntityRef, File, or Folder.
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        None

    Raises:
        ValueError: If the item is not a valid type

    Example: Remove a file from a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, File

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.remove_item(File(id="syn1235"))
        my_dataset.store()
        ```

    Example: Remove a folder from a dataset.
        All child files are recursively removed from the dataset.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, Folder

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.remove_item(Folder(id="syn1236"))
        my_dataset.store()
        ```

    Example: Remove an entity reference from a dataset.
        &nbsp;
        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, EntityRef

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.remove_item(EntityRef(id="syn1237", version=1))
        my_dataset.store()
        ```
    """
    from synapseclient.models import File, Folder

    client = Synapse.get_client(synapse_client=synapse_client)

    if isinstance(item, EntityRef):
        self._remove_entity_ref(item)
    elif isinstance(item, File):
        if not item.version_number:
            item = File(
                id=item.id, version_number=item.version_number, download_file=False
            ).get()
        self._remove_entity_ref(EntityRef(id=item.id, version=item.version_number))
    elif isinstance(item, Folder):
        children = item._retrieve_children(follow_link=True)
        for child in children:
            if child["type"] == concrete_types.FILE_ENTITY:
                self._remove_entity_ref(
                    EntityRef(id=child["id"], version=child["versionNumber"])
                )
            else:
                self.remove_item(item=Folder(id=child["id"]), synapse_client=client)
    else:
        raise ValueError(
            f"item must be one of str, EntityRef, File, or Folder, {item} is a {type(item)}"
        )

store

store(dry_run: bool = False, *, job_timeout: int = 600, synapse_client: Optional[Synapse] = None) -> Self

Store information about a Dataset including the columns and annotations. Storing an update to the Dataset items will alter the rows present in the Dataset.

Datasets have default columns that are managed by Synapse. The default behavior of this function is to include these default columns in the dataset when it is stored. This means that with the default behavior, any columns that you have added to your Dataset will be overwritten by the default columns if they have the same name. To avoid this behavior, set the include_default_columns attribute to False.

Note the following behavior for the order of columns:

  • If a column is added via the add_column method it will be added at the index you specify, or at the end of the columns list.
  • If column(s) are added during the construction of your Dataset instance, i.e. Dataset(columns=[Column(name="foo")]), they will be added at the beginning of the columns list.
  • If you use the store_rows method and the schema_storage_strategy is set to INFER_FROM_DATA the columns will be added at the end of the columns list.
PARAMETER DESCRIPTION
dry_run

If True, will not actually store the table but will log to the console what would have been stored.

TYPE: bool DEFAULT: False

job_timeout

The maximum amount of time to wait for a job to complete. This is used when updating the table schema. If the timeout is reached a SynapseTimeoutError will be raised. The default is 600 seconds

TYPE: int DEFAULT: 600

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
Self

The Dataset instance stored in synapse.

Create a new dataset from a list of EntityRefs by storing it.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset, EntityRef

syn = Synapse()
syn.login()

my_entity_refs = [EntityRef(id="syn1234"), EntityRef(id="syn1235"), EntityRef(id="syn1236")]
my_dataset = Dataset(parent_id="syn987", name="my-new-dataset", items=my_entity_refs)
my_dataset.store()
```
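
As noted above, Synapse-managed default columns are included on store and will overwrite same-named custom columns unless you opt out. A minimal sketch of opting out, assuming the `include_default_columns` attribute is set directly on the instance as described:

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# Assumption: setting this attribute to False prevents the Synapse-managed
# default columns from overwriting custom columns that share a name,
# per the note above.
my_dataset.include_default_columns = False
my_dataset.store()
```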
Source code in synapseclient/models/dataset.py
def store(
    self,
    dry_run: bool = False,
    *,
    job_timeout: int = 600,
    synapse_client: Optional[Synapse] = None,
) -> "Self":
    """Store information about a Dataset including the columns and annotations.
    Storing an update to the Dataset items will alter the rows present in the Dataset.

    Datasets have default columns that are managed by Synapse. The default behavior of
    this function is to include these default columns in the dataset when it is stored.
    This means that with the default behavior, any columns that you have added to your
    Dataset will be overwritten by the default columns if they have the same name. To
    avoid this behavior, set the `include_default_columns` attribute to `False`.

    Note the following behavior for the order of columns:

    - If a column is added via the `add_column` method it will be added at the
        index you specify, or at the end of the columns list.
    - If column(s) are added during the construction of your Dataset instance, i.e.
        `Dataset(columns=[Column(name="foo")])`, they will be added at the beginning
        of the columns list.
    - If you use the `store_rows` method and the `schema_storage_strategy` is set to
        `INFER_FROM_DATA` the columns will be added at the end of the columns list.

    Arguments:
        dry_run: If True, will not actually store the table but will log to
            the console what would have been stored.
        job_timeout: The maximum amount of time to wait for a job to complete.
            This is used when updating the table schema. If the timeout
            is reached a `SynapseTimeoutError` will be raised.
            The default is 600 seconds
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        The Dataset instance stored in synapse.

    Example: Create a new dataset from a list of EntityRefs by storing it.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset, EntityRef

        syn = Synapse()
        syn.login()

        my_entity_refs = [EntityRef(id="syn1234"), EntityRef(id="syn1235"), EntityRef(id="syn1236")]
        my_dataset = Dataset(parent_id="syn987", name="my-new-dataset", items=my_entity_refs)
        my_dataset.store()
        ```
    """
    return self

get

get(include_columns: bool = True, include_activity: bool = False, *, synapse_client: Optional[Synapse] = None) -> Self

Get the metadata about the Dataset from synapse.

PARAMETER DESCRIPTION
include_columns

If True, will include fully filled column objects in the .columns attribute. Defaults to True.

TYPE: bool DEFAULT: True

include_activity

If True the activity will be included in the Dataset if it exists. Defaults to False.

TYPE: bool DEFAULT: False

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
Self

The Dataset instance stored in synapse.

Getting metadata about a Dataset using id

Get a Dataset by ID and print out the columns and activity. include_columns defaults to True and include_activity defaults to False. When you need to update existing columns or activity these need to be set to True during the get call, then you'll make the changes, and finally call the .store() method.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

dataset = Dataset(id="syn4567").get(include_activity=True)
print(dataset)

# Columns are retrieved by default
print(dataset.columns)
print(dataset.activity)
```

Getting metadata about a Dataset using name and parent_id

Get a Dataset by name/parent_id and print out the columns and activity. include_columns defaults to True and include_activity defaults to False. When you need to update existing columns or activity these need to be set to True during the get call, then you'll make the changes, and finally call the .store() method.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

dataset = Dataset(name="my_dataset", parent_id="syn1234").get(include_columns=True, include_activity=True)
print(dataset)
print(dataset.columns)
print(dataset.activity)
```
Source code in synapseclient/models/dataset.py
def get(
    self,
    include_columns: bool = True,
    include_activity: bool = False,
    *,
    synapse_client: Optional[Synapse] = None,
) -> "Self":
    """Get the metadata about the Dataset from synapse.

    Arguments:
        include_columns: If True, will include fully filled column objects in the
            `.columns` attribute. Defaults to True.
        include_activity: If True the activity will be included in the Dataset
            if it exists. Defaults to False.

        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        The Dataset instance stored in synapse.

    Example: Getting metadata about a Dataset using id
        Get a Dataset by ID and print out the columns and activity. `include_columns`
        defaults to True and `include_activity` defaults to False. When you need to
        update existing columns or activity these need to be set to True during the
        `get` call, then you'll make the changes, and finally call the
        `.store()` method.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        dataset = Dataset(id="syn4567").get(include_activity=True)
        print(dataset)

        # Columns are retrieved by default
        print(dataset.columns)
        print(dataset.activity)
        ```

    Example: Getting metadata about a Dataset using name and parent_id
        Get a Dataset by name/parent_id and print out the columns and activity.
        `include_columns` defaults to True and `include_activity` defaults to
        False. When you need to update existing columns or activity these need to
        be set to True during the `get` call, then you'll make the changes,
        and finally call the `.store()` method.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        dataset = Dataset(name="my_dataset", parent_id="syn1234").get(include_columns=True, include_activity=True)
        print(dataset)
        print(dataset.columns)
        print(dataset.activity)
        ```
    """
    return self

delete

delete(*, synapse_client: Optional[Synapse] = None) -> None

Delete the dataset from synapse. This is not version specific. If you'd like to delete a specific version of the dataset you must use the synapseclient.api.delete_entity function directly.

PARAMETER DESCRIPTION
synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
None

None

Deleting a dataset

Deleting a dataset is only supported by the ID of the dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

Dataset(id="syn4567").delete()
```
Source code in synapseclient/models/dataset.py
def delete(self, *, synapse_client: Optional[Synapse] = None) -> None:
    """Delete the dataset from synapse. This is not version specific. If you'd like
    to delete a specific version of the dataset you must use the
    [synapseclient.api.delete_entity][] function directly.

    Arguments:
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        None

    Example: Deleting a dataset
        Deleting a dataset is only supported by the ID of the dataset.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        Dataset(id="syn4567").delete()
        ```
    """
    return None

update_rows

update_rows(values: DATA_FRAME_TYPE, primary_keys: List[str], dry_run: bool = False, *, rows_per_query: int = 50000, update_size_bytes: int = 1.9 * MB, insert_size_bytes: int = 900 * MB, job_timeout: int = 600, wait_for_eventually_consistent_view: bool = False, wait_for_eventually_consistent_view_timeout: int = 600, synapse_client: Optional[Synapse] = None, **kwargs) -> None

Update the values of rows in the dataset. This method can only be used to update values in custom columns. Default columns cannot be updated, but may be used as primary keys.

Limitations:

  • When updating many rows the requests to Synapse will be chunked into smaller requests. The limit is 2MB per request. This chunking will happen automatically and should not be a concern for most users. If you are having issues with the request being too large you may lower the number of rows you are trying to update.
  • The primary_keys argument must contain at least one column.
  • The primary_keys argument cannot contain columns that are a LIST type.
  • The primary_keys argument cannot contain columns that are a JSON type.
  • The values used as the primary_keys must be unique in the table. If there are multiple rows with the same values in the primary_keys the behavior is that an exception will be raised.
  • The columns used in primary_keys cannot contain updated values. Since the values in these columns are used to determine if a row exists, they cannot be updated in the same transaction.
PARAMETER DESCRIPTION
values

Supports storing data from the following sources:

  • A string holding the path to a CSV file. The data will be read into a Pandas DataFrame. The code makes assumptions about the format of the columns in the CSV as detailed in the csv_to_pandas_df function. You may pass in additional arguments to the csv_to_pandas_df function by passing them in as keyword arguments to this function.
  • A dictionary where the key is the column name and the value is one or more values. The values will be wrapped into a Pandas DataFrame. You may pass in additional arguments to the pd.DataFrame function by passing them in as keyword arguments to this function. Read about the available arguments in the Pandas DataFrame documentation.
  • A Pandas DataFrame

TYPE: DATA_FRAME_TYPE

primary_keys

The columns to use to determine if a row already exists. If a row exists with the same values in the columns specified in this list the row will be updated. If a row does not exist nothing will be done.

TYPE: List[str]

dry_run

If set to True the data will not be updated in Synapse. A message will be printed to the console with the number of rows that would have been updated and inserted. If you would like to see the data that would be updated and inserted you may set the dry_run argument to True and set the log level to DEBUG by setting the debug flag when creating your Synapse class instance like: syn = Synapse(debug=True).

TYPE: bool DEFAULT: False

rows_per_query

The number of rows that will be queried from Synapse per request. Since we need to query for the data that is being updated this will determine the number of rows that are queried at a time. The default is 50,000 rows.

TYPE: int DEFAULT: 50000

update_size_bytes

The maximum size of the request that will be sent to Synapse when updating rows of data. The default is 1.9MB.

TYPE: int DEFAULT: 1.9 * MB

insert_size_bytes

The maximum size of the request that will be sent to Synapse when inserting rows of data. The default is 900MB.

TYPE: int DEFAULT: 900 * MB

job_timeout

The maximum amount of time to wait for a job to complete. This is used when inserting, and updating rows of data. Each individual request to Synapse will be sent as an independent job. If the timeout is reached a SynapseTimeoutError will be raised. The default is 600 seconds

TYPE: int DEFAULT: 600

wait_for_eventually_consistent_view

Only used if the table is a view. If set to True this will wait for the view to reflect any changes that you've made to the view. This is useful if you need to query the view after making changes to the data.

TYPE: bool DEFAULT: False

wait_for_eventually_consistent_view_timeout

The maximum amount of time to wait for a view to be eventually consistent. The default is 600 seconds.

TYPE: int DEFAULT: 600

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor

TYPE: Optional[Synapse] DEFAULT: None

**kwargs

Additional arguments that are passed to the pd.DataFrame function when the values argument is a path to a csv file.

DEFAULT: {}

Update custom column values in a dataset.

```python
import pandas as pd
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# my_annotation must already exist in the dataset as a custom column
modified_data = pd.DataFrame(
    {"id": ["syn1234"], "my_annotation": ["good data"]}
)
my_dataset.update_rows(values=modified_data, primary_keys=["id"], dry_run=False)
```
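
The `values` argument also accepts a dictionary, as described above; it is wrapped into a Pandas DataFrame internally. A minimal sketch, paired here with `dry_run=True` to preview the change without writing to Synapse:

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# Dictionary form of `values`; wrapped into a Pandas DataFrame internally.
my_dataset.update_rows(
    values={"id": ["syn1234"], "my_annotation": ["good data"]},
    primary_keys=["id"],
    dry_run=True,  # prints what would be updated/inserted instead of storing
)
```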
Source code in synapseclient/models/dataset.py
def update_rows(
    self,
    values: DATA_FRAME_TYPE,
    primary_keys: List[str],
    dry_run: bool = False,
    *,
    rows_per_query: int = 50000,
    update_size_bytes: int = 1.9 * MB,
    insert_size_bytes: int = 900 * MB,
    job_timeout: int = 600,
    wait_for_eventually_consistent_view: bool = False,
    wait_for_eventually_consistent_view_timeout: int = 600,
    synapse_client: Optional[Synapse] = None,
    **kwargs,
) -> None:
    """Update the values of rows in the dataset. This method can only
    be used to update values in custom columns. Default columns cannot be updated, but
    may be used as primary keys.

    Limitations:

    - When updating many rows the requests to Synapse will be chunked into smaller
        requests. The limit is 2MB per request. This chunking will happen
        automatically and should not be a concern for most users. If you are
        having issues with the request being too large you may lower the
        number of rows you are trying to update.
    - The `primary_keys` argument must contain at least one column.
    - The `primary_keys` argument cannot contain columns that are a LIST type.
    - The `primary_keys` argument cannot contain columns that are a JSON type.
    - The values used as the `primary_keys` must be unique in the table. If there
        are multiple rows with the same values in the `primary_keys` the behavior
        is that an exception will be raised.
    - The columns used in `primary_keys` cannot contain updated values. Since
        the values in these columns are used to determine if a row exists, they
        cannot be updated in the same transaction.

    Arguments:
        values: Supports storing data from the following sources:

            - A string holding the path to a CSV file. The data will be read into a
                [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe).
                The code makes assumptions about the format of the columns in the
                CSV as detailed in the [csv_to_pandas_df][synapseclient.models.mixins.table_components.csv_to_pandas_df]
                function. You may pass in additional arguments to the `csv_to_pandas_df`
                function by passing them in as keyword arguments to this function.
            - A dictionary where the key is the column name and the value is one or
                more values. The values will be wrapped into a
                [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe).
                You may pass in additional arguments to the `pd.DataFrame` function
                by passing them in as keyword arguments to this function. Read about
                the available arguments in the
                [Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html)
                documentation.
            - A [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/api.html#dataframe)

        primary_keys: The columns to use to determine if a row already exists. If
            a row exists with the same values in the columns specified in this list
            the row will be updated. If a row does not exist nothing will be done.

        dry_run: If set to True the data will not be updated in Synapse. A message
            will be printed to the console with the number of rows that would have
            been updated and inserted. If you would like to see the data that would
            be updated and inserted you may set the `dry_run` argument to True and
            set the log level to DEBUG by setting the debug flag when creating
            your Synapse class instance like: `syn = Synapse(debug=True)`.

        rows_per_query: The number of rows that will be queried from Synapse per
            request. Since we need to query for the data that is being updated
            this will determine the number of rows that are queried at a time.
            The default is 50,000 rows.

        update_size_bytes: The maximum size of the request that will be sent to Synapse
            when updating rows of data. The default is 1.9MB.

        insert_size_bytes: The maximum size of the request that will be sent to Synapse
            when inserting rows of data. The default is 900MB.

        job_timeout: The maximum amount of time to wait for a job to complete.
            This is used when inserting, and updating rows of data. Each individual
            request to Synapse will be sent as an independent job. If the timeout
            is reached a `SynapseTimeoutError` will be raised.
            The default is 600 seconds

        wait_for_eventually_consistent_view: Only used if the table is a view. If
            set to True this will wait for the view to reflect any changes that
            you've made to the view. This is useful if you need to query the view
            after making changes to the data.

        wait_for_eventually_consistent_view_timeout: The maximum amount of time to
            wait for a view to be eventually consistent. The default is 600 seconds.

        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor

        **kwargs: Additional arguments that are passed to the `pd.DataFrame`
            function when the `values` argument is a path to a csv file.


    Example: Update custom column values in a dataset.
        &nbsp;

        ```python
        import pandas as pd
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()

        # my_annotation must already exist in the dataset as a custom column
        modified_data = pd.DataFrame(
            {"id": ["syn1234"], "my_annotation": ["good data"]}
        )
        my_dataset.update_rows(values=modified_data, primary_keys=["id"], dry_run=False)
        ```
    """
    return None

snapshot

snapshot(*, comment: Optional[str] = None, label: Optional[str] = None, include_activity: bool = True, associate_activity_to_new_version: bool = True, synapse_client: Optional[Synapse] = None) -> TableUpdateTransaction

Creates a snapshot of the dataset. A snapshot is a saved, read-only version of the dataset at the time it was created. Dataset snapshots are created using the asynchronous job API.

PARAMETER DESCRIPTION
comment

A unique comment to associate with the snapshot.

TYPE: Optional[str] DEFAULT: None

label

A unique label to associate with the snapshot.

TYPE: Optional[str] DEFAULT: None

include_activity

If True the activity will be included in snapshot if it exists. In order to include the activity, the activity must have already been stored in Synapse by using the activity attribute on the Dataset and calling the store() method on the Dataset instance. Adding an activity to a snapshot of a dataset is meant to capture the provenance of the data at the time of the snapshot. Defaults to True.

TYPE: bool DEFAULT: True

associate_activity_to_new_version

If True the activity will be associated with the new version of the dataset. If False the activity will not be associated with the new version of the dataset. Defaults to True.

TYPE: bool DEFAULT: True

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
TableUpdateTransaction

A TableUpdateTransaction object which includes the version number of the snapshot.

Save a snapshot of a dataset.

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()
my_dataset.snapshot(comment="My first snapshot", label="My first snapshot")
```
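
The returned `TableUpdateTransaction` includes the version number of the snapshot, so you may want to capture it. A minimal sketch that prints the transaction object to inspect it (the comment and label values are illustrative):

```python
from synapseclient import Synapse
from synapseclient.models import Dataset

syn = Synapse()
syn.login()

my_dataset = Dataset(id="syn1234").get()

# The returned TableUpdateTransaction includes the snapshot's
# version number, per the Returns section above.
transaction = my_dataset.snapshot(comment="Quarterly freeze", label="2024-Q1")
print(transaction)
```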
Source code in synapseclient/models/dataset.py
def snapshot(
    self,
    *,
    comment: Optional[str] = None,
    label: Optional[str] = None,
    include_activity: bool = True,
    associate_activity_to_new_version: bool = True,
    synapse_client: Optional[Synapse] = None,
) -> "TableUpdateTransaction":
    """Creates a snapshot of the dataset. A snapshot is a saved, read-only version of the dataset
    at the time it was created. Dataset snapshots are created using the asynchronous job API.

    Arguments:
        comment: A unique comment to associate with the snapshot.
        label: A unique label to associate with the snapshot.
        include_activity: If True the activity will be included in snapshot if it
            exists. In order to include the activity, the activity must have already
            been stored in Synapse by using the `activity` attribute on the Dataset
            and calling the `store()` method on the Dataset instance. Adding an
            activity to a snapshot of a dataset is meant to capture the provenance of
            the data at the time of the snapshot. Defaults to True.
        associate_activity_to_new_version: If True the activity will be associated
            with the new version of the dataset. If False the activity will not be
            associated with the new version of the dataset. Defaults to True.
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        A `TableUpdateTransaction` object which includes the version number of the snapshot.

    Example: Save a snapshot of a dataset.
        &nbsp;

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Dataset

        syn = Synapse()
        syn.login()

        my_dataset = Dataset(id="syn1234").get()
        my_dataset.snapshot(comment="My first snapshot", label="My first snapshot")
        ```
    """
    return TableUpdateTransaction

query staticmethod

query(query: str, include_row_id_and_row_version: bool = True, convert_to_datetime: bool = False, download_location=None, quote_character='"', escape_character='\\', line_end=str(linesep), separator=',', header=True, *, synapse_client: Optional[Synapse] = None, **kwargs) -> Union[DATA_FRAME_TYPE, str]

Query for data on a table stored in Synapse. The results will always be returned as a Pandas DataFrame unless you specify a download_location in which case the results will be downloaded to that location. There are a number of arguments that you may pass to this function depending on if you are getting the results back as a DataFrame or downloading the results to a file.

PARAMETER DESCRIPTION
query

The query to run. The query must be valid syntax that Synapse can understand. See this document that describes the expected syntax of the query: https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/web/controller/TableExamples.html

TYPE: str

include_row_id_and_row_version

If True the ROW_ID and ROW_VERSION columns will be returned in the DataFrame. These columns are required if using the query results to update rows in the table. These columns are the primary keys used by Synapse to uniquely identify rows in the table.

TYPE: bool DEFAULT: True

convert_to_datetime

(DataFrame only) If set to True, will convert all Synapse DATE columns from UNIX timestamp integers into UTC datetime objects

TYPE: bool DEFAULT: False

download_location

(CSV Only) If set to a path the results will be downloaded to that directory. The results will be downloaded as a CSV file. A path to the downloaded file will be returned instead of a DataFrame.

DEFAULT: None

quote_character

(CSV Only) The character to use to quote fields. The default is a double quote.

DEFAULT: '"'

escape_character

(CSV Only) The character to use to escape special characters. The default is a backslash.

DEFAULT: '\\'

line_end

(CSV Only) The character to use to end a line. The default is the system's line separator.

DEFAULT: str(linesep)

separator

(CSV Only) The character to use to separate fields. The default is a comma.

DEFAULT: ','

header

(CSV Only) If set to True the first row will be used as the header row. The default is True.

DEFAULT: True

**kwargs

(DataFrame only) Additional keyword arguments to pass to pandas.read_csv. See https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html for the complete list of supported arguments. These are exposed because, internally, the query downloads a CSV from Synapse and then loads it into a DataFrame.

DEFAULT: {}

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
Union[DATA_FRAME_TYPE, str]

The results of the query as a Pandas DataFrame or a path to the downloaded

Union[DATA_FRAME_TYPE, str]

query results if download_location is set.

Querying for data

This example shows how you may query for data in a table and print out the results.

```python
from synapseclient import Synapse
from synapseclient.models import query

syn = Synapse()
syn.login()

results = query(query="SELECT * FROM syn1234")
print(results)
```
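
If you set `download_location`, the CSV-only arguments described above apply and a path to the downloaded file is returned instead of a DataFrame. A minimal sketch, assuming a hypothetical download directory:

```python
from synapseclient import Synapse
from synapseclient.models import query

syn = Synapse()
syn.login()

# Hypothetical directory; the results are written there as a CSV
# and the path to the file is returned instead of a DataFrame.
path = query(
    query="SELECT * FROM syn1234",
    download_location="/tmp/query_results",
    separator="\t",  # CSV-only option: use tabs instead of commas
)
print(path)
```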
Source code in synapseclient/models/mixins/table_components.py
@staticmethod
def query(
    query: str,
    include_row_id_and_row_version: bool = True,
    convert_to_datetime: bool = False,
    download_location=None,
    quote_character='"',
    escape_character="\\",
    line_end=str(os.linesep),
    separator=",",
    header=True,
    *,
    synapse_client: Optional[Synapse] = None,
    **kwargs,
) -> Union[DATA_FRAME_TYPE, str]:
    """Query for data on a table stored in Synapse. The results will always be
    returned as a Pandas DataFrame unless you specify a `download_location` in which
    case the results will be downloaded to that location. There are a number of
    arguments that you may pass to this function depending on if you are getting
    the results back as a DataFrame or downloading the results to a file.

    Arguments:
        query: The query to run. The query must be valid syntax that Synapse can
            understand. See this document that describes the expected syntax of the
            query:
            <https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/web/controller/TableExamples.html>
        include_row_id_and_row_version: If True the `ROW_ID` and `ROW_VERSION`
            columns will be returned in the DataFrame. These columns are required
            if using the query results to update rows in the table. These columns
            are the primary keys used by Synapse to uniquely identify rows in the
            table.
        convert_to_datetime: (DataFrame only) If set to True, will convert all
            Synapse DATE columns from UNIX timestamp integers into UTC datetime
            objects

        download_location: (CSV Only) If set to a path the results will be
            downloaded to that directory. The results will be downloaded as a CSV
            file. A path to the downloaded file will be returned instead of a
            DataFrame.

        quote_character: (CSV Only) The character to use to quote fields. The
            default is a double quote.

        escape_character: (CSV Only) The character to use to escape special
            characters. The default is a backslash.

        line_end: (CSV Only) The character to use to end a line. The default is
            the system's line separator.

        separator: (CSV Only) The character to use to separate fields. The default
            is a comma.

        header: (CSV Only) If set to True the first row will be used as the header
            row. The default is True.

        **kwargs: (DataFrame only) Additional keyword arguments to pass to
            pandas.read_csv. See
            <https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html>
            for the complete list of supported arguments. These are exposed
            because, internally, the query downloads a CSV from Synapse and
            then loads it into a dataframe.
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        The results of the query as a Pandas DataFrame or a path to the downloaded
        query results if `download_location` is set.

    Example: Querying for data
        This example shows how you may query for data in a table and print out the
        results.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import query

        syn = Synapse()
        syn.login()

        results = query(query="SELECT * FROM syn1234")
        print(results)
        ```
    """
    # Replaced at runtime
    return ""

query_part_mask staticmethod

query_part_mask(query: str, part_mask: int, *, synapse_client: Optional[Synapse] = None) -> QueryResultBundle

Query for data on a table stored in Synapse. This is a more advanced use case of the query function that allows you to determine what additional metadata about the table or query should also be returned. If you do not need this additional information then you are better off using the query function.

The query for this method uses this Rest API: https://rest-docs.synapse.org/rest/POST/entity/id/table/query/async/start.html

PARAMETER DESCRIPTION
query

The query to run. The query must be valid syntax that Synapse can understand. See this document that describes the expected syntax of the query: https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/web/controller/TableExamples.html

TYPE: str

part_mask

The bitwise OR of the part mask values you want to return in the results. The following list of part masks are implemented to be returned in the results:

  • Query Results (queryResults) = 0x1
  • Query Count (queryCount) = 0x2
  • The sum of the file sizes (sumFileSizesBytes) = 0x40
  • The last updated on date of the table (lastUpdatedOn) = 0x80

TYPE: int

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
QueryResultBundle

The results of the query as a Pandas DataFrame.

Querying for data with a part mask

This example shows how to use the bitwise OR of Python to combine the part mask values and then use that to query for data in a table and print out the results.

In this case we are getting the results of the query, the count of rows, and the last updated on date of the table.

```python
from synapseclient import Synapse
from synapseclient.models import query_part_mask

syn = Synapse()
syn.login()

QUERY_RESULTS = 0x1
QUERY_COUNT = 0x2
LAST_UPDATED_ON = 0x80

# Combine the part mask values using bitwise OR
part_mask = QUERY_RESULTS | QUERY_COUNT | LAST_UPDATED_ON

result = query_part_mask(query="SELECT * FROM syn1234", part_mask=part_mask)
print(result)
```
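
The part mask list above also includes the sum of file sizes (sumFileSizesBytes = 0x40). A minimal sketch combining it with the query results:

```python
from synapseclient import Synapse
from synapseclient.models import query_part_mask

syn = Synapse()
syn.login()

QUERY_RESULTS = 0x1
SUM_FILE_SIZES = 0x40  # sumFileSizesBytes, per the part mask list above

part_mask = QUERY_RESULTS | SUM_FILE_SIZES

result = query_part_mask(query="SELECT * FROM syn1234", part_mask=part_mask)
print(result)
```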
Source code in synapseclient/models/mixins/table_components.py
@staticmethod
def query_part_mask(
    query: str,
    part_mask: int,
    *,
    synapse_client: Optional[Synapse] = None,
) -> QueryResultBundle:
    """Query for data on a table stored in Synapse. This is a more advanced use case
    of the `query` function that allows you to determine what additional metadata
    about the table or query should also be returned. If you do not need this
    additional information then you are better off using the `query` function.

    The query for this method uses this Rest API:
    <https://rest-docs.synapse.org/rest/POST/entity/id/table/query/async/start.html>

    Arguments:
        query: The query to run. The query must be valid syntax that Synapse can
            understand. See this document that describes the expected syntax of the
            query:
            <https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/web/controller/TableExamples.html>
        part_mask: The bitwise OR of the part mask values you want to return in the
            results. The following list of part masks are implemented to be returned
            in the results:

            - Query Results (queryResults) = 0x1
            - Query Count (queryCount) = 0x2
            - The sum of the file sizes (sumFileSizesBytes) = 0x40
            - The last updated on date of the table (lastUpdatedOn) = 0x80

        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        The results of the query as a Pandas DataFrame.

    Example: Querying for data with a part mask
        This example shows how to use the bitwise `OR` of Python to combine the
        part mask values and then use that to query for data in a table and print
        out the results.

        In this case we are getting the results of the query, the count of rows, and
        the last updated on date of the table.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import query_part_mask

        syn = Synapse()
        syn.login()

        QUERY_RESULTS = 0x1
        QUERY_COUNT = 0x2
        LAST_UPDATED_ON = 0x80

        # Combine the part mask values using bitwise OR
        part_mask = QUERY_RESULTS | QUERY_COUNT | LAST_UPDATED_ON

        result = query_part_mask(query="SELECT * FROM syn1234", part_mask=part_mask)
        print(result)
        ```
    """
    # Replaced at runtime
    return QueryResultBundle(result=None)

add_column

add_column(column: Union[Column, List[Column]], index: int = None) -> None

Add column(s) to the table. Note that this does not store the column(s) in Synapse. You must call the .store() function on this table class instance to store the column(s) in Synapse. This is a convenience function to eliminate the need to manually add the column(s) to the dictionary.

This function will add an item to the .columns attribute of this class instance. .columns is a dictionary where the key is the name of the column and the value is the Column object.

PARAMETER DESCRIPTION
column

The column(s) to add; may be a single Column object or a list of Column objects.

TYPE: Union[Column, List[Column]]

index

The index to insert the column at. If not passed in, the column will be added to the end of the list.

TYPE: int DEFAULT: None

RETURNS DESCRIPTION
None

None

Adding a single column

This example shows how you may add a single column to a table and then store the change back in Synapse.

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

table = Table(
    id="syn1234"
).get(include_columns=True)

table.add_column(
    Column(name="my_column", column_type=ColumnType.STRING)
)
table.store()
Adding multiple columns

This example shows how you may add multiple columns to a table and then store the change back in Synapse.

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

table = Table(
    id="syn1234"
).get(include_columns=True)

table.add_column([
    Column(name="my_column", column_type=ColumnType.STRING),
    Column(name="my_column2", column_type=ColumnType.INTEGER),
])
table.store()
Adding a column at a specific index

This example shows how you may add a column at a specific index to a table and then store the change back in Synapse. If the index is out of bounds, the column will be added to the end of the list.

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

table = Table(
    id="syn1234"
).get(include_columns=True)

table.add_column(
    Column(name="my_column", column_type=ColumnType.STRING),
    # Add the column at the beginning of the list
    index=0
)
table.store()
Adding a single column (async)

This example shows how you may add a single column to a table and then store the change back in Synapse.

import asyncio
from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

async def main():
    table = await Table(
        id="syn1234"
    ).get_async(include_columns=True)

    table.add_column(
        Column(name="my_column", column_type=ColumnType.STRING)
    )
    await table.store_async()

asyncio.run(main())
Adding multiple columns (async)

This example shows how you may add multiple columns to a table and then store the change back in Synapse.

import asyncio
from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

async def main():
    table = await Table(
        id="syn1234"
    ).get_async(include_columns=True)

    table.add_column([
        Column(name="my_column", column_type=ColumnType.STRING),
        Column(name="my_column2", column_type=ColumnType.INTEGER),
    ])
    await table.store_async()

asyncio.run(main())
Adding a column at a specific index (async)

This example shows how you may add a column at a specific index to a table and then store the change back in Synapse. If the index is out of bounds, the column will be added to the end of the list.

import asyncio
from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

async def main():
    table = await Table(
        id="syn1234"
    ).get_async(include_columns=True)

    table.add_column(
        Column(name="my_column", column_type=ColumnType.STRING),
        # Add the column at the beginning of the list
        index=0
    )
    await table.store_async()

asyncio.run(main())
Source code in synapseclient/models/mixins/table_components.py
def add_column(
    self, column: Union["Column", List["Column"]], index: int = None
) -> None:
    """Add column(s) to the table. Note that this does not store the column(s) in
    Synapse. You must call the `.store()` function on this table class instance to
    store the column(s) in Synapse. This is a convenience function to eliminate
    the need to manually add the column(s) to the dictionary.


    This function will add an item to the `.columns` attribute of this class
    instance. `.columns` is a dictionary where the key is the name of the column
    and the value is the Column object.

    Arguments:
        column: The column(s) to add; may be a single Column object or a list of
            Column objects.
        index: The index to insert the column at. If not passed in, the column will
            be added to the end of the list.

    Returns:
        None

    Example: Adding a single column
        This example shows how you may add a single column to a table and then store
        the change back in Synapse.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        table = Table(
            id="syn1234"
        ).get(include_columns=True)

        table.add_column(
            Column(name="my_column", column_type=ColumnType.STRING)
        )
        table.store()
        ```


    Example: Adding multiple columns
        This example shows how you may add multiple columns to a table and then store
        the change back in Synapse.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        table = Table(
            id="syn1234"
        ).get(include_columns=True)

        table.add_column([
            Column(name="my_column", column_type=ColumnType.STRING),
            Column(name="my_column2", column_type=ColumnType.INTEGER),
        ])
        table.store()
        ```

    Example: Adding a column at a specific index
        This example shows how you may add a column at a specific index to a table
        and then store the change back in Synapse. If the index is out of bounds, the
        column will be added to the end of the list.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        table = Table(
            id="syn1234"
        ).get(include_columns=True)

        table.add_column(
            Column(name="my_column", column_type=ColumnType.STRING),
            # Add the column at the beginning of the list
            index=0
        )
        table.store()
        ```

    Example: Adding a single column (async)
        This example shows how you may add a single column to a table and then store
        the change back in Synapse.

        ```python
        import asyncio
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        async def main():
            table = await Table(
                id="syn1234"
            ).get_async(include_columns=True)

            table.add_column(
                Column(name="my_column", column_type=ColumnType.STRING)
            )
            await table.store_async()

        asyncio.run(main())
        ```

    Example: Adding multiple columns (async)
        This example shows how you may add multiple columns to a table and then store
        the change back in Synapse.

        ```python
        import asyncio
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        async def main():
            table = await Table(
                id="syn1234"
            ).get_async(include_columns=True)

            table.add_column([
                Column(name="my_column", column_type=ColumnType.STRING),
                Column(name="my_column2", column_type=ColumnType.INTEGER),
            ])
            await table.store_async()

        asyncio.run(main())
        ```

    Example: Adding a column at a specific index (async)
        This example shows how you may add a column at a specific index to a table
        and then store the change back in Synapse. If the index is out of bounds, the
        column will be added to the end of the list.

        ```python
        import asyncio
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        async def main():
            table = await Table(
                id="syn1234"
            ).get_async(include_columns=True)

            table.add_column(
                Column(name="my_column", column_type=ColumnType.STRING),
                # Add the column at the beginning of the list
                index=0
            )
            await table.store_async()

        asyncio.run(main())
        ```
    """
    if not self._last_persistent_instance:
        raise ValueError(
            "This method is only supported after interacting with Synapse via a `.get()` or `.store()` operation"
        )

    if index is not None:
        if isinstance(column, list):
            columns_to_insert = []
            for i, col in enumerate(column):
                if col.name in self.columns:
                    raise ValueError(f"Duplicate column name: {col.name}")
                columns_to_insert.append((col.name, col))
            insert_index = min(index, len(self.columns))
            self.columns = OrderedDict(
                list(self.columns.items())[:insert_index]
                + columns_to_insert
                + list(self.columns.items())[insert_index:]
            )
        else:
            if column.name in self.columns:
                raise ValueError(f"Duplicate column name: {column.name}")
            insert_index = min(index, len(self.columns))
            self.columns = OrderedDict(
                list(self.columns.items())[:insert_index]
                + [(column.name, column)]
                + list(self.columns.items())[insert_index:]
            )

    else:
        if isinstance(column, list):
            for col in column:
                if col.name in self.columns:
                    raise ValueError(f"Duplicate column name: {col.name}")
                self.columns[col.name] = col
        else:
            if column.name in self.columns:
                raise ValueError(f"Duplicate column name: {column.name}")
            self.columns[column.name] = column
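
As the source above shows, add_column raises a ValueError when a column with the same name already exists on the table. A minimal sketch of guarding against that case (the column name here is hypothetical):

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

table = Table(id="syn1234").get(include_columns=True)

try:
    table.add_column(Column(name="my_column", column_type=ColumnType.STRING))
except ValueError as err:
    # Raised when the table already has a column named "my_column"
    print(f"Column not added: {err}")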

delete_column

delete_column(name: str) -> None

Mark a column for deletion. Note that this does not delete the column from Synapse. You must call the .store() function on this table class instance to delete the column from Synapse. This is a convenience function to eliminate the need to manually delete the column from the dictionary and add it to the ._columns_to_delete attribute.

PARAMETER DESCRIPTION
name

The name of the column to delete.

TYPE: str

RETURNS DESCRIPTION
None

None

Deleting a column

This example shows how you may delete a column from a table and then store the change back in Synapse.

from synapseclient import Synapse
from synapseclient.models import Table

syn = Synapse()
syn.login()

table = Table(
    id="syn1234"
).get(include_columns=True)

table.delete_column(name="my_column")
table.store()
Deleting a column (async)

This example shows how you may delete a column from a table and then store the change back in Synapse.

import asyncio
from synapseclient import Synapse
from synapseclient.models import Table

syn = Synapse()
syn.login()

async def main():
    table = await Table(
        id="syn1234"
    ).get_async(include_columns=True)

    table.delete_column(name="my_column")
    await table.store_async()

asyncio.run(main())
Source code in synapseclient/models/mixins/table_components.py
def delete_column(self, name: str) -> None:
    """
    Mark a column for deletion. Note that this does not delete the column from
    Synapse. You must call the `.store()` function on this table class instance to
    delete the column from Synapse. This is a convenience function to eliminate
    the need to manually delete the column from the dictionary and add it to the
    `._columns_to_delete` attribute.

    Arguments:
        name: The name of the column to delete.

    Returns:
        None

    Example: Deleting a column
        This example shows how you may delete a column from a table and then store
        the change back in Synapse.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Table

        syn = Synapse()
        syn.login()

        table = Table(
            id="syn1234"
        ).get(include_columns=True)

        table.delete_column(name="my_column")
        table.store()
        ```

    Example: Deleting a column (async)
        This example shows how you may delete a column from a table and then store
        the change back in Synapse.

        ```python
        import asyncio
        from synapseclient import Synapse
        from synapseclient.models import Table

        syn = Synapse()
        syn.login()

        async def main():
            table = await Table(
                id="syn1234"
            ).get_async(include_columns=True)

            table.delete_column(name="my_column")
            await table.store_async()

        asyncio.run(main())
        ```
    """
    if not self._last_persistent_instance:
        raise ValueError(
            "This method is only supported after interacting with Synapse via a `.get()` or `.store()` operation"
        )
    if not self.columns:
        raise ValueError(
            "There are no columns. Make sure you use the `include_columns` parameter in the `.get()` method."
        )

    column_to_delete = self.columns.get(name, None)
    if not column_to_delete:
        raise ValueError(f"Column with name {name} does not exist in the table.")

    self._columns_to_delete[column_to_delete.id] = column_to_delete
    self.columns.pop(column_to_delete.name, None)
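
Note from the source above that delete_column raises a ValueError when no columns are loaded on the instance. A minimal sketch of that failure mode, assuming columns are not loaded unless include_columns=True is passed to .get():

from synapseclient import Synapse
from synapseclient.models import Table

syn = Synapse()
syn.login()

# Columns were not requested, so .columns is empty
table = Table(id="syn1234").get()

try:
    table.delete_column(name="my_column")
except ValueError as err:
    # "There are no columns. Make sure you use the `include_columns` ..."
    print(err)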

reorder_column

reorder_column(name: str, index: int) -> None

Reorder a column in the table. Note that this does not store the column in Synapse. You must call the .store() function on this table class instance to store the column in Synapse. This is a convenience function to eliminate the need to manually reorder the .columns attribute dictionary.

You must ensure that the index is within the bounds of the number of columns in the table. If you pass in an index that is out of bounds, the column will be added to the end of the list.

PARAMETER DESCRIPTION
name

The name of the column to reorder.

TYPE: str

index

The index to move the column to starting with 0.

TYPE: int

RETURNS DESCRIPTION
None

None

Reordering a column

This example shows how you may reorder a column in a table and then store the change back in Synapse.

from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

table = Table(
    id="syn1234"
).get(include_columns=True)

# Move the column to the beginning of the list
table.reorder_column(name="my_column", index=0)
table.store()
Reordering a column (async)

This example shows how you may reorder a column in a table and then store the change back in Synapse.

import asyncio
from synapseclient import Synapse
from synapseclient.models import Column, ColumnType, Table

syn = Synapse()
syn.login()

async def main():
    table = await Table(
        id="syn1234"
    ).get_async(include_columns=True)

    # Move the column to the beginning of the list
    table.reorder_column(name="my_column", index=0)
    await table.store_async()

asyncio.run(main())
Source code in synapseclient/models/mixins/table_components.py
def reorder_column(self, name: str, index: int) -> None:
    """Reorder a column in the table. Note that this does not store the column in
    Synapse. You must call the `.store()` function on this table class instance to
    store the column in Synapse. This is a convenience function to eliminate
    the need to manually reorder the `.columns` attribute dictionary.

    You must ensure that the index is within the bounds of the number of columns in
    the table. If you pass in an index that is out of bounds, the column will be
    added to the end of the list.

    Arguments:
        name: The name of the column to reorder.
        index: The index to move the column to starting with 0.

    Returns:
        None

    Example: Reordering a column
        This example shows how you may reorder a column in a table and then store
        the change back in Synapse.

        ```python
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        table = Table(
            id="syn1234"
        ).get(include_columns=True)

        # Move the column to the beginning of the list
        table.reorder_column(name="my_column", index=0)
        table.store()
        ```


    Example: Reordering a column (async)
        This example shows how you may reorder a column in a table and then store
        the change back in Synapse.

        ```python
        import asyncio
        from synapseclient import Synapse
        from synapseclient.models import Column, ColumnType, Table

        syn = Synapse()
        syn.login()

        async def main():
            table = await Table(
                id="syn1234"
            ).get_async(include_columns=True)

            # Move the column to the beginning of the list
            table.reorder_column(name="my_column", index=0)
            await table.store_async()

        asyncio.run(main())
        ```
    """
    if not self._last_persistent_instance:
        raise ValueError(
            "This method is only supported after interacting with Synapse via a `.get()` or `.store()` operation"
        )

    column_to_reorder = self.columns.pop(name, None)
    if not column_to_reorder:
        raise ValueError(f"Column with name {name} does not exist in the table.")
    if index >= len(self.columns):
        self.columns[name] = column_to_reorder
        return

    self.columns = OrderedDict(
        list(self.columns.items())[:index]
        + [(name, column_to_reorder)]
        + list(self.columns.items())[index:]
    )
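
Per the docstring above, an out-of-bounds index is not an error: the column is simply moved to the end of the list. A minimal sketch (the column name is hypothetical):

from synapseclient import Synapse
from synapseclient.models import Table

syn = Synapse()
syn.login()

table = Table(id="syn1234").get(include_columns=True)

# 999 is past the end of the column list, so the column lands at the end
table.reorder_column(name="my_column", index=999)
table.store()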

get_permissions

get_permissions(*, synapse_client: Optional[Synapse] = None) -> Permissions

Get the permissions that the caller has on an Entity.

PARAMETER DESCRIPTION
synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
Permissions

A Permissions object

Using this function:

Getting permissions for a Synapse Entity

from synapseclient import Synapse
from synapseclient.models import File

syn = Synapse()
syn.login()

permissions = File(id="syn123").get_permissions()

Getting access types list from the Permissions object

permissions.access_types
Source code in synapseclient/models/protocols/access_control_protocol.py
def get_permissions(
    self,
    *,
    synapse_client: Optional[Synapse] = None,
) -> "Permissions":
    """
    Get the [permissions][synapseclient.core.models.permission.Permissions]
    that the caller has on an Entity.

    Arguments:
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        A Permissions object


    Example: Using this function:
        Getting permissions for a Synapse Entity

        ```python
        from synapseclient import Synapse
        from synapseclient.models import File

        syn = Synapse()
        syn.login()

        permissions = File(id="syn123").get_permissions()
        ```

        Getting access types list from the Permissions object

        ```python
        permissions.access_types
        ```
    """
    # Protocol stub: replaced at runtime by the concrete implementation
    return self

get_acl

get_acl(principal_id: int = None, *, synapse_client: Optional[Synapse] = None) -> List[str]

Get the ACL that a user or group has on an Entity.

PARAMETER DESCRIPTION
principal_id

Identifier of a user or group (defaults to PUBLIC users)

TYPE: int DEFAULT: None

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
List[str]

An array containing some combination of ['READ', 'UPDATE', 'CREATE', 'DELETE', 'DOWNLOAD', 'MODERATE', 'CHANGE_PERMISSIONS', 'CHANGE_SETTINGS'] or an empty array
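
There is no rendered example for this method, so here is a minimal sketch in the style of the other examples. The principal ID 273948 (all registered Synapse users) follows the set_permissions documentation below:

from synapseclient import Synapse
from synapseclient.models import File

syn = Synapse()
syn.login()

# Omit principal_id to check PUBLIC access instead
acl = File(id="syn123").get_acl(principal_id=273948)
print(acl)  # some combination of ['READ', 'DOWNLOAD', ...] or an empty list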

Source code in synapseclient/models/protocols/access_control_protocol.py
def get_acl(
    self, principal_id: int = None, *, synapse_client: Optional[Synapse] = None
) -> List[str]:
    """
    Get the [ACL][synapseclient.core.models.permission.Permissions.access_types]
    that a user or group has on an Entity.

    Arguments:
        principal_id: Identifier of a user or group (defaults to PUBLIC users)
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        An array containing some combination of
            ['READ', 'UPDATE', 'CREATE', 'DELETE', 'DOWNLOAD', 'MODERATE',
            'CHANGE_PERMISSIONS', 'CHANGE_SETTINGS']
            or an empty array
    """
    return [""]

set_permissions

set_permissions(principal_id: int = None, access_type: List[str] = None, modify_benefactor: bool = False, warn_if_inherits: bool = True, overwrite: bool = True, *, synapse_client: Optional[Synapse] = None) -> Dict[str, Union[str, list]]

Sets permission that a user or group has on an Entity. An Entity may have its own ACL or inherit its ACL from a benefactor.

PARAMETER DESCRIPTION
principal_id

Identifier of a user or group. 273948 is for all registered Synapse users and 273949 is for public access. None implies public access.

TYPE: int DEFAULT: None

access_type

Type of permission to be granted. One or more of CREATE, READ, DOWNLOAD, UPDATE, DELETE, CHANGE_PERMISSIONS.

Defaults to ['READ', 'DOWNLOAD']

TYPE: List[str] DEFAULT: None

modify_benefactor

Set as True when modifying a benefactor's ACL. The term 'benefactor' is used to indicate which Entity an Entity inherits its ACL from. For example, a newly created Project will be its own benefactor, while a new FileEntity's benefactor will start off as its containing Project. If the entity already has local sharing settings the benefactor would be itself. It may also be the immediate parent, somewhere in the parent tree, or the project itself.

TYPE: bool DEFAULT: False

warn_if_inherits

When modify_benefactor is True, this does not have any effect. When modify_benefactor is False, and warn_if_inherits is True, a warning log message is produced if the benefactor for the entity you passed into the function is not itself, i.e., it's the parent folder, or another entity in the parent tree.

TYPE: bool DEFAULT: True

overwrite

By default this function overwrites existing permissions for the specified user. Set this flag to False to add new permissions non-destructively.

TYPE: bool DEFAULT: True

synapse_client

If not passed in and caching was not disabled by Synapse.allow_client_caching(False) this will use the last created instance from the Synapse class constructor.

TYPE: Optional[Synapse] DEFAULT: None

RETURNS DESCRIPTION
Dict[str, Union[str, list]]

An Access Control List object

Setting permissions

Grant all registered users download access

from synapseclient import Synapse
from synapseclient.models import File

syn = Synapse()
syn.login()

File(id="syn123").set_permissions(principal_id=273948, access_type=['READ','DOWNLOAD'])

Grant the public view access

from synapseclient import Synapse
from synapseclient.models import File

syn = Synapse()
syn.login()

File(id="syn123").set_permissions(principal_id=273949, access_type=['READ'])
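
Because overwrite defaults to True, each call replaces whatever the user or group was previously granted. A sketch of adding permissions non-destructively instead, per the overwrite parameter described above:

from synapseclient import Synapse
from synapseclient.models import File

syn = Synapse()
syn.login()

# Keep the principal's existing permissions and add to them
File(id="syn123").set_permissions(
    principal_id=273948,
    access_type=['READ', 'DOWNLOAD'],
    overwrite=False,
)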
Source code in synapseclient/models/protocols/access_control_protocol.py
def set_permissions(
    self,
    principal_id: int = None,
    access_type: List[str] = None,
    modify_benefactor: bool = False,
    warn_if_inherits: bool = True,
    overwrite: bool = True,
    *,
    synapse_client: Optional[Synapse] = None,
) -> Dict[str, Union[str, list]]:
    """
    Sets permission that a user or group has on an Entity.
    An Entity may have its own ACL or inherit its ACL from a benefactor.

    Arguments:
        principal_id: Identifier of a user or group. `273948` is for all
            registered Synapse users and `273949` is for public access.
            None implies public access.
        access_type: Type of permission to be granted. One or more of CREATE,
            READ, DOWNLOAD, UPDATE, DELETE, CHANGE_PERMISSIONS.

            **Defaults to ['READ', 'DOWNLOAD']**
        modify_benefactor: Set as True when modifying a benefactor's ACL. The term
            'benefactor' is used to indicate which Entity an Entity inherits its
            ACL from. For example, a newly created Project will be its own
            benefactor, while a new FileEntity's benefactor will start off as its
            containing Project. If the entity already has local sharing settings
            the benefactor would be itself. It may also be the immediate parent,
            somewhere in the parent tree, or the project itself.
        warn_if_inherits: When `modify_benefactor` is True, this does not have any
            effect. When `modify_benefactor` is False, and `warn_if_inherits` is
            True, a warning log message is produced if the benefactor for the
            entity you passed into the function is not itself, i.e., it's the
            parent folder, or another entity in the parent tree.
        overwrite: By default this function overwrites existing permissions for
            the specified user. Set this flag to False to add new permissions
            non-destructively.
        synapse_client: If not passed in and caching was not disabled by
            `Synapse.allow_client_caching(False)` this will use the last created
            instance from the Synapse class constructor.

    Returns:
        An Access Control List object

    Example: Setting permissions
        Grant all registered users download access

        ```python
        from synapseclient import Synapse
        from synapseclient.models import File

        syn = Synapse()
        syn.login()

        File(id="syn123").set_permissions(principal_id=273948, access_type=['READ','DOWNLOAD'])
        ```

        Grant the public view access

        ```python
        from synapseclient import Synapse
        from synapseclient.models import File

        syn = Synapse()
        syn.login()

        File(id="syn123").set_permissions(principal_id=273949, access_type=['READ'])
        ```
    """
    # Protocol stub: replaced at runtime by the concrete implementation
    return {}

synapseclient.models.EntityRef dataclass

Represents a reference to the id and version of an entity to be used in Dataset and DatasetCollection objects.

ATTRIBUTE DESCRIPTION
id

The Synapse ID of the entity.

TYPE: str

version

Indicates a specific version of the entity.

TYPE: int
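
A minimal sketch of constructing a reference that pins a specific version of an entity (the ID and version are placeholders):

from synapseclient.models import EntityRef

# Pin version 2 of the entity syn1234 for use in a Dataset or DatasetCollection
ref = EntityRef(id="syn1234", version=2)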

Source code in synapseclient/models/dataset.py
@dataclass
class EntityRef:
    """
    Represents a reference to the id and version of an entity to be used in `Dataset` and
    `DatasetCollection` objects.

    Attributes:
        id: The Synapse ID of the entity.
        version: Indicates a specific version of the entity.
    """

    id: str
    """The Synapse ID of the entity."""

    version: int
    """Indicates a specific version of the entity."""

    def to_synapse_request(self):
        """Converts the attributes of an EntityRef instance to a
        request expected of the Synapse REST API."""

        return {
            "entityId": self.id,
            "versionNumber": self.version,
        }

Attributes

id instance-attribute

id: str

The Synapse ID of the entity.

version instance-attribute

version: int

Indicates a specific version of the entity.

Functions

to_synapse_request

to_synapse_request()

Converts the attributes of an EntityRef instance to a request expected of the Synapse REST API.

Source code in synapseclient/models/dataset.py
def to_synapse_request(self):
    """Converts the attributes of an EntityRef instance to a
    request expected of the Synapse REST API."""

    return {
        "entityId": self.id,
        "versionNumber": self.version,
    }
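
For reference, a quick sketch of the request payload this produces (IDs are placeholders):

from synapseclient.models import EntityRef

ref = EntityRef(id="syn1234", version=2)
print(ref.to_synapse_request())
# {'entityId': 'syn1234', 'versionNumber': 2}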