First attempt

commit 89677fbeee
22 changed files with 3461 additions and 0 deletions

voc/LICENSE (new file, 287 lines)
@@ -0,0 +1,287 @@
EUROPEAN UNION PUBLIC LICENCE v. 1.2
EUPL © the European Union 2007, 2016

This European Union Public Licence (the ‘EUPL’) applies to the Work (as defined
below) which is provided under the terms of this Licence. Any use of the Work,
other than as authorised under this Licence is prohibited (to the extent such
use is covered by a right of the copyright holder of the Work).

The Work is provided under the terms of this Licence when the Licensor (as
defined below) has placed the following notice immediately following the
copyright notice for the Work:

        Licensed under the EUPL

or has expressed by any other means his willingness to license under the EUPL.

1. Definitions

In this Licence, the following terms have the following meaning:

- ‘The Licence’: this Licence.

- ‘The Original Work’: the work or software distributed or communicated by the
  Licensor under this Licence, available as Source Code and also as Executable
  Code as the case may be.

- ‘Derivative Works’: the works or software that could be created by the
  Licensee, based upon the Original Work or modifications thereof. This Licence
  does not define the extent of modification or dependence on the Original Work
  required in order to classify a work as a Derivative Work; this extent is
  determined by copyright law applicable in the country mentioned in Article 15.

- ‘The Work’: the Original Work or its Derivative Works.

- ‘The Source Code’: the human-readable form of the Work which is the most
  convenient for people to study and modify.

- ‘The Executable Code’: any code which has generally been compiled and which is
  meant to be interpreted by a computer as a program.

- ‘The Licensor’: the natural or legal person that distributes or communicates
  the Work under the Licence.

- ‘Contributor(s)’: any natural or legal person who modifies the Work under the
  Licence, or otherwise contributes to the creation of a Derivative Work.

- ‘The Licensee’ or ‘You’: any natural or legal person who makes any usage of
  the Work under the terms of the Licence.

- ‘Distribution’ or ‘Communication’: any act of selling, giving, lending,
  renting, distributing, communicating, transmitting, or otherwise making
  available, online or offline, copies of the Work or providing access to its
  essential functionalities at the disposal of any other natural or legal
  person.

2. Scope of the rights granted by the Licence

The Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
sublicensable licence to do the following, for the duration of copyright vested
in the Original Work:

- use the Work in any circumstance and for all usage,
- reproduce the Work,
- modify the Work, and make Derivative Works based upon the Work,
- communicate to the public, including the right to make available or display
  the Work or copies thereof to the public and perform publicly, as the case may
  be, the Work,
- distribute the Work or copies thereof,
- lend and rent the Work or copies thereof,
- sublicense rights in the Work or copies thereof.

Those rights can be exercised on any media, supports and formats, whether now
known or later invented, as far as the applicable law permits so.

In the countries where moral rights apply, the Licensor waives his right to
exercise his moral right to the extent allowed by law in order to make effective
the licence of the economic rights here above listed.

The Licensor grants to the Licensee royalty-free, non-exclusive usage rights to
any patents held by the Licensor, to the extent necessary to make use of the
rights granted on the Work under this Licence.

3. Communication of the Source Code

The Licensor may provide the Work either in its Source Code form, or as
Executable Code. If the Work is provided as Executable Code, the Licensor
provides in addition a machine-readable copy of the Source Code of the Work
along with each copy of the Work that the Licensor distributes or indicates, in
a notice following the copyright notice attached to the Work, a repository where
the Source Code is easily and freely accessible for as long as the Licensor
continues to distribute or communicate the Work.

4. Limitations on copyright

Nothing in this Licence is intended to deprive the Licensee of the benefits from
any exception or limitation to the exclusive rights of the rights owners in the
Work, of the exhaustion of those rights or of other applicable limitations
thereto.

5. Obligations of the Licensee

The grant of the rights mentioned above is subject to some restrictions and
obligations imposed on the Licensee. Those obligations are the following:

Attribution right: The Licensee shall keep intact all copyright, patent or
trademarks notices and all notices that refer to the Licence and to the
disclaimer of warranties. The Licensee must include a copy of such notices and a
copy of the Licence with every copy of the Work he/she distributes or
communicates. The Licensee must cause any Derivative Work to carry prominent
notices stating that the Work has been modified and the date of modification.

Copyleft clause: If the Licensee distributes or communicates copies of the
Original Works or Derivative Works, this Distribution or Communication will be
done under the terms of this Licence or of a later version of this Licence
unless the Original Work is expressly distributed only under this version of the
Licence — for example by communicating ‘EUPL v. 1.2 only’. The Licensee
(becoming Licensor) cannot offer or impose any additional terms or conditions on
the Work or Derivative Work that alter or restrict the terms of the Licence.

Compatibility clause: If the Licensee Distributes or Communicates Derivative
Works or copies thereof based upon both the Work and another work licensed under
a Compatible Licence, this Distribution or Communication can be done under the
terms of this Compatible Licence. For the sake of this clause, ‘Compatible
Licence’ refers to the licences listed in the appendix attached to this Licence.
Should the Licensee's obligations under the Compatible Licence conflict with
his/her obligations under this Licence, the obligations of the Compatible
Licence shall prevail.

Provision of Source Code: When distributing or communicating copies of the Work,
the Licensee will provide a machine-readable copy of the Source Code or indicate
a repository where this Source will be easily and freely available for as long
as the Licensee continues to distribute or communicate the Work.

Legal Protection: This Licence does not grant permission to use the trade names,
trademarks, service marks, or names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the copyright notice.

6. Chain of Authorship

The original Licensor warrants that the copyright in the Original Work granted
hereunder is owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.

Each Contributor warrants that the copyright in the modifications he/she brings
to the Work are owned by him/her or licensed to him/her and that he/she has the
power and authority to grant the Licence.

Each time You accept the Licence, the original Licensor and subsequent
Contributors grant You a licence to their contributions to the Work, under the
terms of this Licence.

7. Disclaimer of Warranty

The Work is a work in progress, which is continuously improved by numerous
Contributors. It is not a finished work and may therefore contain defects or
‘bugs’ inherent to this type of development.

For the above reason, the Work is provided under the Licence on an ‘as is’ basis
and without warranties of any kind concerning the Work, including without
limitation merchantability, fitness for a particular purpose, absence of defects
or errors, accuracy, non-infringement of intellectual property rights other than
copyright as stated in Article 6 of this Licence.

This disclaimer of warranty is an essential part of the Licence and a condition
for the grant of any rights to the Work.

8. Disclaimer of Liability

Except in the cases of wilful misconduct or damages directly caused to natural
persons, the Licensor will in no event be liable for any direct or indirect,
material or moral, damages of any kind, arising out of the Licence or of the use
of the Work, including without limitation, damages for loss of goodwill, work
stoppage, computer failure or malfunction, loss of data or any commercial
damage, even if the Licensor has been advised of the possibility of such damage.
However, the Licensor will be liable under statutory product liability laws as
far such laws apply to the Work.

9. Additional agreements

While distributing the Work, You may choose to conclude an additional agreement,
defining obligations or services consistent with this Licence. However, if
accepting obligations, You may act only on your own behalf and on your sole
responsibility, not on behalf of the original Licensor or any other Contributor,
and only if You agree to indemnify, defend, and hold each Contributor harmless
for any liability incurred by, or claims asserted against such Contributor by
the fact You have accepted any warranty or additional liability.

10. Acceptance of the Licence

The provisions of this Licence can be accepted by clicking on an icon ‘I agree’
placed under the bottom of a window displaying the text of this Licence or by
affirming consent in any other similar way, in accordance with the rules of
applicable law. Clicking on that icon indicates your clear and irrevocable
acceptance of this Licence and all of its terms and conditions.

Similarly, you irrevocably accept this Licence and all of its terms and
conditions by exercising any rights granted to You by Article 2 of this Licence,
such as the use of the Work, the creation by You of a Derivative Work or the
Distribution or Communication by You of the Work or copies thereof.

11. Information to the public

In case of any Distribution or Communication of the Work by means of electronic
communication by You (for example, by offering to download the Work from a
remote location) the distribution channel or media (for example, a website) must
at least provide to the public the information requested by the applicable law
regarding the Licensor, the Licence and the way it may be accessible, concluded,
stored and reproduced by the Licensee.

12. Termination of the Licence

The Licence and the rights granted hereunder will terminate automatically upon
any breach by the Licensee of the terms of the Licence.

Such a termination will not terminate the licences of any person who has
received the Work from the Licensee under the Licence, provided such persons
remain in full compliance with the Licence.

13. Miscellaneous

Without prejudice of Article 9 above, the Licence represents the complete
agreement between the Parties as to the Work.

If any provision of the Licence is invalid or unenforceable under applicable
law, this will not affect the validity or enforceability of the Licence as a
whole. Such provision will be construed or reformed so as necessary to make it
valid and enforceable.

The European Commission may publish other linguistic versions or new versions of
this Licence or updated versions of the Appendix, so far this is required and
reasonable, without reducing the scope of the rights granted by the Licence. New
versions of the Licence will be published with a unique version number.

All linguistic versions of this Licence, approved by the European Commission,
have identical value. Parties can take advantage of the linguistic version of
their choice.

14. Jurisdiction

Without prejudice to specific agreement between parties,

- any litigation resulting from the interpretation of this License, arising
  between the European Union institutions, bodies, offices or agencies, as a
  Licensor, and any Licensee, will be subject to the jurisdiction of the Court
  of Justice of the European Union, as laid down in article 272 of the Treaty on
  the Functioning of the European Union,

- any litigation arising between other parties and resulting from the
  interpretation of this License, will be subject to the exclusive jurisdiction
  of the competent court where the Licensor resides or conducts its primary
  business.

15. Applicable Law

Without prejudice to specific agreement between parties,

- this Licence shall be governed by the law of the European Union Member State
  where the Licensor has his seat, resides or has his registered office,

- this licence shall be governed by Belgian law if the Licensor has no seat,
  residence or registered office inside a European Union Member State.

Appendix

‘Compatible Licences’ according to Article 5 EUPL are:

- GNU General Public License (GPL) v. 2, v. 3
- GNU Affero General Public License (AGPL) v. 3
- Open Software License (OSL) v. 2.1, v. 3.0
- Eclipse Public License (EPL) v. 1.0
- CeCILL v. 2.0, v. 2.1
- Mozilla Public Licence (MPL) v. 2
- GNU Lesser General Public Licence (LGPL) v. 2.1, v. 3
- Creative Commons Attribution-ShareAlike v. 3.0 Unported (CC BY-SA 3.0) for
  works other than software
- European Union Public Licence (EUPL) v. 1.1, v. 1.2
- Québec Free and Open-Source Licence — Reciprocity (LiLiQ-R) or Strong
  Reciprocity (LiLiQ-R+).

The European Commission may update this Appendix to later versions of the above
licences without producing a new version of the EUPL, as long as they provide
the rights granted in Article 2 of this Licence and protect the covered Source
Code from exclusive appropriation.

All other changes or additions to this Appendix require the production of a new
EUPL version.
voc/README.md (new file, 239 lines)
@@ -0,0 +1,239 @@
# C3VOC Schedule Tools

[PyPI](https://badge.fury.io/py/c3voc-schedule-tools)
[Licence: EUPL-1.2](https://joinup.ec.europa.eu/collection/eupl/eupl-text-eupl-12)
[Python](https://www.python.org/downloads/)

A Python library for generating, converting, and validating [schedule files](https://c3voc.de/wiki/schedule) for conferences and events.

Originally developed for Chaos Computer Club events (C3), this library supports multiple schedule formats and conference management systems, including [pretalx](https://github.com/pretalx/pretalx) and [frab](https://frab.github.io/frab/).

## Features

- **Integration**: Direct integration with pretalx, frab, and other conference planning systems
- **Schedule Validation**: Built-in validation against the schedule XML schema
- **Flexible Data Sources**: Support for web APIs, local files, and custom data sources
- **Multiple Converters**: Built-in converters for various data sources and formats

## Installation

```bash
pip install c3voc-schedule-tools
```

## Quick Start

### Basic Schedule Creation

```python
from voc import Schedule, Event, Room

# Create a new schedule
schedule = Schedule.from_template(
    name="My Conference 2024",
    conference_title="My Conference",
    conference_acronym="MC24",
    start_day=25,
    days_count=3,
    timezone="Europe/Berlin"
)

# Add rooms; generate your own globally unique ids, e.g. via `uuidgen`
schedule.add_rooms([
    {"name": "Main Hall", "guid": "67D04C40-B35A-496A-A31C-C0F3FF63DAB7"},
    {"name": "Workshop Room", "guid": "5564FBA9-DBB5-4B6B-A0F0-CCF6C9F1EBD7"}
])

# Add an event
event = Event({
    "id": "event-1",
    "title": "Opening Keynote",
    "abstract": "Welcome to the conference",
    "date": "2024-12-25T10:00:00+01:00",
    "duration": "01:00",
    "room": "Main Hall",
    "track": "Keynotes",
    "type": "lecture",
    "language": "en",
    "persons": [{"public_name": "Jane Doe"}]
})
schedule.add_event(event)

# Export to JSON
schedule.export('schedule.json')
```

### Loading from Pretalx

```python
from voc import PretalxConference, Schedule

# Load conference data from pretalx
conference = PretalxConference(
    url="https://pretalx.example.com/event/my-conference/",
    data={"name": "My Conference"}
)

# Get the schedule
schedule = conference.schedule()

# Export to different formats
schedule.export('schedule.json')
schedule.export('schedule.xml')
```

### Working with Existing Schedules

```python
from voc import Schedule

# Load from URL
schedule = Schedule.from_url("https://example.com/schedule.json")

# Load from file
schedule = Schedule.from_file("schedule.json")

# Filter events by track
track_events = schedule.events(filter=lambda e: e.get('track') == 'Security')

# Get all rooms
rooms = schedule.rooms()

# Get events for a specific day
day_1_events = schedule.day(1).events()
```

## API Reference

### Core Classes

#### Schedule

The main schedule container that holds conference metadata, days, rooms, and events.

**Key Methods:**

- `Schedule.from_url(url)` - Load a schedule from a URL
- `Schedule.from_file(path)` - Load a schedule from a file
- `Schedule.from_template(...)` - Create a schedule from a template
- `add_event(event)` - Add an event to the schedule
- `add_rooms(rooms)` - Add rooms to the schedule
- `export(filename)` - Export to a file
- `validate()` - Validate against the XML schema

#### Event

Represents a single conference event/talk.

**Properties:**

- `guid` - Globally unique event identifier
- `id` - Local event identifier (deprecated)
- `title` - Event title
- `abstract` - Event description
- `date` - Start date/time
- `duration` - Event duration
- `room` - Room name
- `track` - Track/category
- `persons` - List of speakers

#### Room

Represents a conference room, lecture hall, etc.

**Properties:**

- `name` - Room name
- `guid` - Globally unique room identifier

### Conference Planning Systems

#### PretalxConference

Integration with the pretalx conference management system.

```python
conference = PretalxConference(
    url="https://pretalx.example.com/event/",
    data={"name": "Conference Name"}
)
schedule = conference.schedule()
```

#### GenericConference

Base class for generic conference data sources.

```python
conference = GenericConference(
    url="https://example.com/schedule.json",
    data={"name": "Conference Name"}
)
```

#### WebcalConference

Import from iCal/webcal sources.

```python
from voc import WebcalConference

conference = WebcalConference(url="https://example.com/events.ics")
schedule = conference.schedule(template_schedule)
```

## Supported Formats

### Input Formats

- **JSON**: schedule.json format
- **iCal**: RFC 5545 iCalendar format
- **Pretalx API**: Direct API integration
- **CSV**: Custom CSV formats (see examples in the [parent folder](https://github.com/voc/schedule/blob/master/csv2schedule_deu.py))

### Output Formats

- **JSON**: C3VOC schedule.json
- **XML**: CCC / Frab schedule XML aka [vnd.c3voc.schedule+xml](https://www.iana.org/assignments/media-types/application/vnd.c3voc.schedule+xml)
- **iCal**: RFC 5545 format (TODO?)

## Configuration

### Environment Variables

- `PRETALX_TOKEN` - API token for pretalx integration
- `C3DATA_API_URL` - C3data API endpoint
- `C3DATA_TOKEN` - C3data authentication token
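These variables are read from the process environment with plain `os.getenv` lookups (c3data.py in this repository resolves its endpoint and token the same way, defaulting the URL to `https://data.c3voc.de/graphql`). A minimal sketch of that lookup pattern; `resolve_config` is a hypothetical helper for illustration, not part of the library:

```python
import os

def resolve_config(env: dict) -> dict:
    """Resolve the configuration keys listed above, with fallbacks.

    The URL fallback mirrors the default c3data.py uses; the token
    keys have no default and stay None when unset.
    """
    return {
        "pretalx_token": env.get("PRETALX_TOKEN"),
        "c3data_api_url": env.get("C3DATA_API_URL", "https://data.c3voc.de/graphql"),
        "c3data_token": env.get("C3DATA_TOKEN"),
    }

# Resolve against the real environment of the current process
config = resolve_config(dict(os.environ))
```

Passing the environment in as a plain dict keeps the helper trivially testable without monkeypatching `os.environ`.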

### Validation

The library includes built-in validation against the schedule XML schema:

```python
# Validate a schedule
try:
    schedule.validate()
    print("Schedule is valid")
except ScheduleException as e:
    print(f"Validation error: {e}")
```

## Examples

TBD, see parent folder

## License

This project is licensed under the EUPL-1.2 License - see the [LICENSE](LICENSE) file for details.

## Links

- [Documentation](https://c3voc.de/wiki/schedule)
- [PyPI Package](https://pypi.org/project/c3voc-schedule-tools/)
- [Source Code](https://github.com/voc/schedule)
- [Issue Tracker](https://github.com/voc/schedule/issues)
voc/__init__.py (new file, 11 lines)
@@ -0,0 +1,11 @@
# flake8: noqa

from .schedule import Schedule, ScheduleDay, ScheduleEncoder, ScheduleException
from .event import Event
from .room import Room
from .generic import GenericConference
from .pretalx import PretalxConference
from .webcal import WebcalConference
from .webcal2 import WebcalConference2

from .logger import Logger
voc/__pycache__/__init__.cpython-312.pyc (new file, BIN — binary file not shown)
voc/__pycache__/schedule.cpython-312.pyc (new file, BIN — binary file not shown)
voc/c3data.py (new file, 242 lines)
@@ -0,0 +1,242 @@
import argparse
from os import getenv, path
import json

from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from gql.transport.exceptions import TransportQueryError

try:
    from .schedule import Schedule
    from .event import Event
    from .room import Room
    from .tools import load_json, write
    from . import logger

except ImportError:
    import sys
    sys.path.append('.')

    from schedule import Schedule
    from event import Event
    from room import Room
    from tools import load_json, write
    from voc import logger


transport = AIOHTTPTransport(
    url=getenv('C3D_URL', 'https://data.c3voc.de/graphql'),
    headers={'Authorization': getenv('C3D_TOKEN', 'Basic|Bearer XXXX')}
)
# transport = AIOHTTPTransport(url="http://localhost:5001/graphql")

# Create a GraphQL client using the defined transport
client = Client(transport=transport, fetch_schema_from_transport=True)


def create_conference(schedule: Schedule):
    conference = schedule.conference()
    data = {
        'conference': {
            'acronym': conference['acronym'],
            'title': conference['title'],
            'startDate': conference['start'],
            'endDate': conference['end'],
            'daysUsingId': {
                'create': [{
                    'index': day['index'],
                    'startDate': day['day_start'],
                    'endDate': day['day_end']
                } for day in schedule.days()]
            },
            'roomsUsingId': {
                'create': [room.graphql() for room in schedule.rooms(mode='obj')]
            }
        }
    }
    # print(json.dumps(data, indent=2))

    try:
        result = client.execute(gql('''
            mutation createConferenceAndDaysAndRooms($input: CreateConferenceInput!) {
                createConference(input: $input) {
                    conference {
                        id
                        rooms {
                            nodes {
                                guid
                                slug
                                name
                            }
                        }
                    }
                }
            }
        '''), variable_values={'input': data})
        return result['createConference']

    except TransportQueryError as e:
        # re-raise if the error is not 'conference already exists'
        if 'duplicate key value violates unique constraint "conferences_acronym_key"' != e.errors[0]['message']:
            raise e

        # conference already exists, so try to get the required infos
        result = get_conference(conference['acronym'])
        return result


def get_conference(acronym):
    return client.execute(gql('''
        query getConferenceAndRooms($acronym: String!) {
            conference: conferenceByAcronym(acronym: $acronym) {
                id
                rooms {
                    nodes {
                        guid
                        name
                    }
                }
            }
        }'''), variable_values={'acronym': acronym})


def add_room(conference_id, room: Room):
    result = client.execute(gql('''
        mutation addRoom($input: UpsertRoomInput!) {
            upsertRoom(input: $input) {
                room { guid, name, slug, meta }
            }
        }'''), {'input': {'room': {
            **room.graphql(),
            'conferenceId': conference_id
        }}})

    print(result)
    return result['upsertRoom']['room']['guid']


def add_event(conference_id, room_id, event: Event):
    data = {
        "event": {
            **(event.graphql()),
            "conferenceId": conference_id,
            "roomId": room_id,
            "eventPeopleUsingGuid": {
                "create": [
                    # TODO: add person guid
                    {"personId": str(p['id']), "publicName": p.get('name') or p.get('public_name')} for p in event['persons']
                ]}
        }
    }

    query = gql('''
        mutation upsertEvent($input: UpsertEventInput!) {
            upsertEvent(input: $input) {
                clientMutationId
            }
        }
    ''')

    try:
        write('.')
        client.execute(query, {'input': data})
    except Exception as e:
        print(json.dumps(data, indent=2))
        print()
        print(e)
        print()


def remove_event(event_guid):
    try:
        client.execute(gql('''
            mutation deleteEvent($guid: UUID!) {
                deleteEvent(input: {guid: $guid}) { deletedEventNodeId }
            }
        '''), {'guid': event_guid})
    except Exception as e:
        print(e)
        print()


class C3data:
    conference_id = None
    room_ids = {}

    def __init__(self, schedule: Schedule, create=False):
        result = create_conference(schedule) if create else get_conference(schedule.conference('acronym'))
        if "errors" in result:
            logger.error(result['errors'])
        if not result['conference']:
            raise RuntimeError("Please create conference in target system using --create")
        self.conference_id = result['conference']['id']

        self.room_ids = {x['name']: x['guid'] for x in result['conference']['rooms']['nodes']}

        # check for new/updated rooms
        for room in schedule.rooms(mode='obj'):
            if room.name not in self.room_ids:
                room_id = add_room(self.conference_id, room)
                self.room_ids[room.name] = room_id

        # TODO check for new rooms and create them now

    def upsert_event(self, event: Event):
        if event['room'] in self.room_ids:
            room_id = self.room_ids[event['room']]
        else:
            print('WARNING: Room {} does not exist, creating.'.format(event['room']))
            room_id = add_room(self.conference_id, Room(name=event['room'], guid=event.get('room_id')))
            self.room_ids[event['room']] = room_id
        add_event(self.conference_id, room_id, event)

    def depublish_event(self, event_guid):
        remove_event(event_guid)

    def process_changed_events(self, repo: 'Repo', options):
        changed_items = repo.index.diff('HEAD~1', 'events')
        for i in changed_items:
            write(i.change_type + ': ')
            try:
                if i.change_type == 'D':
                    event_guid = path.splitext(path.basename(i.a_path))[0]
                    self.depublish_event(event_guid)
                else:
                    event = Event(load_json(i.a_path))
                    self.upsert_event(event)
            except KeyboardInterrupt:
                break
            except Exception as e:
                print(e)
                if options.exit_when_exception_occours:
                    raise e


def upsert_schedule(schedule: Schedule, create=False):
    c3data = C3data(schedule, create)
    try:
        schedule.foreach_event(c3data.upsert_event)
    except KeyboardInterrupt:
        pass


def test():
    # schedule = Schedule.from_url('https://fahrplan.events.ccc.de/camp/2019/Fahrplan/schedule.json')
    schedule = Schedule.from_file('38C3/everything.schedule.json')

    upsert_schedule(schedule, create=True)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('url', action="store", help="url or local path of the source schedule.json")
    parser.add_argument('--create', action="store_true", default=False)
    args = parser.parse_args()

    schedule = Schedule.from_url(args.url) if args.url.startswith('http') else Schedule.from_file(args.url)
    upsert_schedule(schedule, create=args.create)

    print('')
    print('done')
voc/event.py (new file, 163 lines)
@@ -0,0 +1,163 @@
import re
import json
import collections
from collections import OrderedDict
import dateutil.parser
from datetime import datetime, timedelta

from voc.tools import str2timedelta


class EventSourceInterface:
    origin_system = None


class Schedule(EventSourceInterface):
    pass


class Event(collections.abc.Mapping):
    _event = None
    origin: EventSourceInterface = None
    start: datetime = None
    duration: timedelta = None

    def __init__(self, data, start_time: datetime = None, origin: EventSourceInterface = None):
        # when being restored from a single event file, we have to process the origin attribute specially
        if 'origin' in data:
            self.origin = EventSourceInterface()
            self.origin.origin_system = data['origin']
            del data['origin']

        # remove empty optional fields – and url... Does anybody remember why `url`, too?
        for field in ["video_download_url", "answers", "url"]:
            if field in data and not (data[field]):
                del data[field]

        assert 'id' in data or data.get('guid'), "guid (or id) is required"
        assert 'title' in data
        assert 'date' in data

        self.start = start_time or dateutil.parser.parse(data["date"])
        self.duration = str2timedelta(data["duration"])

        if 'start' not in data:
            data['start'] = self.start.strftime('%H:%M')

        # empty description for pretalx importer (temporary workaround)
        if 'description' not in data:
            data['description'] = ''

        self._event = OrderedDict(data)

        # generate id from guid when it is not set, so old apps can still process this event
        if 'id' not in data and 'guid' in data:
            from voc.tools import get_id
            self._event['id'] = get_id(self['guid'], length=4)
        # keep an origin restored from data unless an explicit origin is given
        self.origin = origin or self.origin

    @property
    def end(self):
        return self.start + self.duration

    def __getitem__(self, key):
        return self._event.get(key)

    def __setitem__(self, key, value):
        self._event[key] = value

    def __iter__(self):
        return self._event.__iter__()

    def __len__(self):
        return len(self._event)

    def items(self):
        return self._event.items()

    def persons(self):
        return [p.get("name", p.get("public_name")) for p in self._event["persons"]]

    def json(self):
        return self._event

    def graphql(self):
        r = dict(
            (re.sub(r"_([a-z])", lambda m: (m.group(1).upper()), k), v)
            for k, v in self._event.items()
        )
        r["localId"] = self._event["id"]
        del r["id"]
        r["eventType"] = self._event["type"]
        del r["type"]
        del r["room"]
        del r["start"]
        r["startDate"] = self._event["date"]
        del r["date"]
        duration = self._event["duration"].split(":")
        r["duration"] = {"hours": int(duration[0]), "minutes": int(duration[1])}
        del r["persons"]
        if "recording" in r:
            if r["recording"].get("optout") is True:
                r["do_not_record"] = True
            del r["recording"]
        if "videoDownloadUrl" in r:
            del r["videoDownloadUrl"]
        if "answers" in r:
            del r["answers"]
        # fix wrongly formatted links
        if "links" in r and len(r["links"]) > 0 and isinstance(r["links"][0], str):
            r["links"] = [{"url": url, "title": url} for url in r["links"]]
        return r

    def voctoimport(self):
        r = dict(self._event.items())
        r["talkid"] = self._event["id"]
        del r["id"]
        del r["type"]
        del r["start"]
        del r["persons"]
        del r["logo"]
        del r["subtitle"]
        if "recording_license" in r:
            del r["recording_license"]
        if "recording" in r:
            del r["recording"]
        if "do_not_record" in r:
            del r["do_not_record"]
        if "video_download_url" in r:
            del r["video_download_url"]
        if "answers" in r:
            del r["answers"]
        if "links" in r:
            del r["links"]
        if "attachments" in r:
            del r["attachments"]
        return r

    # export all attributes which are not part of the rC3 core event model
    def meta(self):
        r = OrderedDict(self._event.items())
        # r['local_id'] = self._event['id']
        # del r["id"]
        del r["guid"]
        del r["slug"]
        del r["room"]
        del r["start"]
        del r["date"]
        del r["duration"]
        del r["track_id"]
        del r["track"]
        # del r['persons']
        # if 'answers' in r:
        #     del r['answers']
        # fix wrongly formatted links
        if len(r["links"]) > 0 and isinstance(r["links"][0], str):
            r["links"] = [{"url": url, "title": url} for url in r["links"]]
        return r

    def __str__(self):
        return json.dumps(self._event, indent=2)

    def export(self, prefix, suffix=""):
        with open("{}{}{}.json".format(prefix, self._event["guid"], suffix), "w") as fp:
            json.dump(self._event, fp, indent=2)
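The `graphql()` method above renames snake_case keys to camelCase with a one-line regex substitution. The same transform in isolation, as a minimal sketch:

```python
import re


def to_camel(key: str) -> str:
    # 'video_download_url' -> 'videoDownloadUrl'
    return re.sub(r"_([a-z])", lambda m: m.group(1).upper(), key)


def camelize(event: dict) -> dict:
    # rename all top-level keys of an event dict
    return {to_camel(k): v for k, v in event.items()}
```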
25
voc/generic.py
Normal file
@@ -0,0 +1,25 @@
from voc.event import EventSourceInterface
from .schedule import Schedule, ScheduleException
from urllib.parse import urlparse


class GenericConference(dict, EventSourceInterface):
    schedule_url = None
    options = {}
    timeout = 10

    def __init__(self, url, data, options={}):
        self.origin_system = urlparse(url).netloc
        self.schedule_url = url
        self.options = options
        self['url'] = url
        dict.__init__(self, data)

    def __str__(self):
        return self['name']

    def schedule(self, *args) -> Schedule:
        if not self.schedule_url or self.schedule_url == 'TBD':
            raise ScheduleException(' has no schedule url yet – ignoring')

        return Schedule.from_url(self.schedule_url, self.timeout)
52
voc/git.py
Normal file
@@ -0,0 +1,52 @@
import argparse
import json
import os
from git import Repo
from voc.c3data import C3data
from voc.event import Event
from voc.schedule import Schedule, ScheduleEncoder
from voc.tools import (
    commit_changes_if_something_relevant_changed,
    git,
)


def export_event_files(schedule: Schedule, options: argparse.Namespace, local=False):
    # to get a proper state, we first have to remove all event files from the previous run
    if not local or options.git:
        git("rm events/* >/dev/null")
    os.makedirs('events', exist_ok=True)

    # write a separate file for each event, to get better git diffs
    # TODO: use Event.export()
    def export_event(event: Event):
        origin_system = None
        if isinstance(event, Event) and event.origin:
            origin_system = event.origin.origin_system

        with open("events/{}.json".format(event["guid"]), "w") as fp:
            json.dump(
                {
                    **event,
                    "room_id": schedule._room_ids.get(event["room"], None),
                    "origin": origin_system or None,
                },
                fp,
                indent=2,
                cls=ScheduleEncoder,
            )

    schedule.foreach_event(export_event)


def postprocessing(schedule: Schedule, options: argparse.Namespace, local=False, targets=[]):
    if not local or options.git:
        commit_changes_if_something_relevant_changed(schedule)
        # Attention: this method exits the script if nothing relevant changed
        # TODO: make this fact more obvious, or refactor the code

    if not local and "c3data" in targets:
        print("\n== Updating c3data via API…")

        c3data = C3data(schedule)
        c3data.process_changed_events(Repo('.'), options)
50
voc/logger.py
Normal file
@@ -0,0 +1,50 @@
import logging
from logging import info, debug, warn, error, critical  # noqa
import argparse


__all__ = ['info', 'debug', 'warn', 'error', 'critical']


class Logger(logging.Logger):
    def __init__(self, name, args=None, level='INFO'):
        logging.Logger.__init__(self, name, level)
        # log = logging.getLogger(name)

        if False and args is None:
            parser = argparse.ArgumentParser()
            parser.add_argument('--quiet', action='store_true')
            parser.add_argument('--debug', action='store_true')
            parser.add_argument('--verbose', '-v', action='store_true')
            args = parser.parse_args()

        if args:
            configure_logging(args)


def configure_logging(args):
    verbosity = (args.verbose or args.debug or 0) - (args.quiet or 0)
    if verbosity <= -2:
        level = logging.CRITICAL
    elif verbosity == -1:
        level = logging.ERROR
    elif verbosity == 0:
        level = logging.WARNING
    elif verbosity == 1:
        level = logging.INFO
    elif verbosity >= 2:
        level = logging.DEBUG

    # fancy colors
    logging.addLevelName(logging.CRITICAL, '\033[1;41m%s\033[1;0m' % logging.getLevelName(logging.CRITICAL))
    logging.addLevelName(logging.ERROR, '\033[1;31m%s\033[1;0m' % logging.getLevelName(logging.ERROR))
    logging.addLevelName(logging.WARNING, '\033[1;33m%s\033[1;0m' % logging.getLevelName(logging.WARNING))
    logging.addLevelName(logging.INFO, '\033[1;32m%s\033[1;0m' % logging.getLevelName(logging.INFO))
    logging.addLevelName(logging.DEBUG, '\033[1;34m%s\033[1;0m' % logging.getLevelName(logging.DEBUG))

    if args.debug:
        log_format = '%(asctime)s - %(name)s - %(levelname)s {%(filename)s:%(lineno)d} %(message)s'
    else:
        log_format = '%(asctime)s - %(levelname)s - %(message)s'

    # logging.basicConfig(filename=args.logfile, level=level, format=log_format)
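`configure_logging()` above derives a log level from combined `-v`/`--debug`/`--quiet` counts. The same mapping as a standalone function, for reference:

```python
import logging


def level_from_verbosity(verbosity: int) -> int:
    # more -v -> more output; more -q -> less output
    if verbosity <= -2:
        return logging.CRITICAL
    elif verbosity == -1:
        return logging.ERROR
    elif verbosity == 0:
        return logging.WARNING
    elif verbosity == 1:
        return logging.INFO
    return logging.DEBUG  # verbosity >= 2
```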
50
voc/pretalx.py
Normal file
@@ -0,0 +1,50 @@
from os import path, getenv
import requests
from urllib.parse import urlparse

from voc import GenericConference, logger


headers = {'Authorization': 'Token ' + getenv('PRETALX_TOKEN', ''), 'Content-Type': 'application/json'}


class PretalxConference(GenericConference):
    slug = None
    api_url = None

    def __init__(self, url, data, options={}):
        GenericConference.__init__(self, url, data, options)

        if url and url != 'TBD':
            self.schedule_url = path.join(url, "schedule/export/schedule.json")
            r = urlparse(url)
            self.slug = data.get('slug', path.basename(r.path))

            # /api/events/hip-berlin-2022
            self.api_url = path.join(f"{r.scheme}://{r.netloc}{path.dirname(r.path)}", "api/events", self.slug)

            try:
                # load additional metadata via the pretalx REST API
                self['meta'] = self.meta()
                self['rooms'] = self.rooms()
            except Exception as e:
                logger.warn(e)

    def meta(self):
        return requests.get(self.api_url, timeout=self.timeout) \
            .json()

    def rooms(self):
        return requests.get(self.api_url + '/rooms', timeout=self.timeout, headers=headers if self.origin_system == 'pretalx.c3voc.de' else {'Content-Type': 'application/json'}) \
            .json() \
            .get('results')

    def latest_schedule(self):
        # Custom pretalx schedule format
        return requests.get(self.api_url + '/schedules/latest/', timeout=self.timeout) \
            .json()

    # def tracks(self):
    #     return requests.get(self.api_url + '/tracks', timeout=1, headers=headers) if self.origin_system == 'pretalx.c3voc.de' else {} \
    #         .json() \
    #         .get('results')
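`PretalxConference.__init__` above derives the REST API endpoint from the public conference URL with `urlparse` plus `os.path.join` (which assumes a POSIX-style separator, as the class itself does). The derivation in isolation:

```python
from os import path
from urllib.parse import urlparse


def pretalx_api_url(url: str) -> str:
    # 'https://host/<slug>' -> 'https://host/api/events/<slug>'
    r = urlparse(url)
    slug = path.basename(r.path)
    return path.join(f"{r.scheme}://{r.netloc}{path.dirname(r.path)}", "api/events", slug)
```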
130
voc/rc3hub.py
Normal file
@@ -0,0 +1,130 @@
from os import getenv
import json
import requests

try:
    from .schedule import Schedule
except ImportError:
    from schedule import Schedule

url = getenv('HUB_URL', 'https://api-test.rc3.cccv.de/api/c/rc3/')
conference_id = "17391cf3-fc95-4294-bc34-b8371c6d89b3"  # rc3 test

headers = {
    'Authorization': 'Token ' + getenv('HUB_TOKEN', 'XXXX'),
    'Accept': 'application/json'
}


def get(path):
    print('GET ' + url + path)
    r = requests.get(url + path, headers=headers)
    print(r.status_code)
    return r.json()


def post_event(event):
    print('POST {}event/{}/schedule'.format(url, event['guid']))
    r = requests.post(
        '{}event/{}/schedule'.format(url, event['guid']),
        json=event,
        headers=headers
    )
    print(r.status_code)

    if r.status_code != 201:
        print(json.dumps(event, indent=2))
        raise Exception(r.json()['error'])
    return r


def upsert_event(event):
    if event['track']:
        if event['track'] not in tracks:
            print('WARNING: Track {} does not exist'.format(event['track']))
            event['track'] = None

    # Workaround for a bug in the hub: remove an empty room_id from the dict
    if 'room_id' in event and not (event['room_id']) and 'room' in event:
        del event['room_id']

    post_event(event)


def depublish_event(event_guid):
    post_event({
        'guid': event_guid,
        'public': False
    })


skip = False
tracks = []


def init(channels):
    global tracks

    tracks = {x['name']: x['id'] for x in get('tracks')}


def push_schedule(schedule):
    channel_room_ids = {x['schedule_room']: x['room_guid'] for x in channels}
    rooms = get('rooms')
    room_ids = {x['name']: x['id'] for x in rooms}
    hub_room_names = {x['id']: x['name'] for x in rooms}

    print(tracks)

    def process(event):
        global skip
        if skip:
            if event['guid'] == skip:
                skip = False
            return

        try:
            if event['room'] in channel_room_ids:
                event['room_id'] = channel_room_ids.get(event['room'])
                del event['room']
            elif event['room'] not in room_ids:
                if event['room'] in channel_room_ids:
                    try:
                        event['room'] = hub_room_names[channel_room_ids[event['room']]]
                    except Exception as e:
                        print(json.dumps(event, indent=2))
                        print(e)
                else:
                    print('ERROR: Room {} does not exist'.format(event['room']))
                    return
            upsert_event(event)

        except Exception as e:
            print(json.dumps(event, indent=2))
            print(event['guid'])
            print(e)

    schedule.foreach_event(process)


if __name__ == '__main__':
    import optparse
    parser = optparse.OptionParser()
    # Skips all events until the event with guid X is found.
    parser.add_option('--skip', action="store", dest="skip", default=False)

    options, args = parser.parse_args()
    skip = options.skip

    channels = requests \
        .get('https://c3voc.de/wiki/lib/exe/graphql2.php?query={channels{nodes{schedule_room,room_guid}}}') \
        .json()['data']['channels']['nodes']

    init(channels)

    schedule = Schedule.from_url('https://data.c3voc.de/rC3/everything.schedule.json')
    # schedule = Schedule.from_url('https://data.c3voc.de/rC3/channels.schedule.json')
    # schedule = Schedule.from_file('rC3/channels.schedule.json')

    push_schedule(schedule)
    print('done')
45
voc/room.py
Normal file
@@ -0,0 +1,45 @@
from dataclasses import dataclass, fields

try:
    from voc.event import Schedule
    from voc.tools import gen_uuid, normalise_string
except ImportError:
    from event import Schedule
    from tools import gen_uuid, normalise_string


@dataclass
class Room:
    guid: str = None
    name: str = None
    stream: str = None
    description: str = None
    capacity: int = None
    location: str = None

    _parent: Schedule = None

    @classmethod
    def from_dict(cls, data: dict):
        assert isinstance(data, dict), 'Data must be a dictionary.'

        fieldSet = {f.name for f in fields(cls) if f.init}
        filteredData = {k: v for k, v in data.items() if k in fieldSet}

        return cls(**filteredData)

    def graphql(self):
        return {
            'name': self.name,
            'guid': self.guid or gen_uuid(self.name),
            'description': self.description,
            # 'stream_id': room.stream,
            'slug': normalise_string(self.name.lower()),
            'meta': {'location': self.location},
        }

    # @name.setter
    def update_name(self, new_name: str, update_parent=True):
        if self._parent and update_parent:
            self._parent.rename_rooms({self.name: new_name})

        self.name = new_name
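`Room.from_dict` above filters out unknown keys before calling the dataclass constructor, so stray attributes in a room dict cannot crash the import. The pattern in isolation, with a hypothetical minimal dataclass:

```python
from dataclasses import dataclass, fields


@dataclass
class MiniRoom:
    # hypothetical two-field stand-in for the full Room dataclass
    guid: str = None
    name: str = None


def from_dict(cls, data: dict):
    # keep only keys that correspond to init-able dataclass fields
    allowed = {f.name for f in fields(cls) if f.init}
    return cls(**{k: v for k, v in data.items() if k in allowed})
```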
855
voc/schedule.py
Normal file
@@ -0,0 +1,855 @@
import sys
|
||||
import os
|
||||
import re
|
||||
import json
|
||||
import copy
|
||||
import requests
|
||||
import pytz
|
||||
import dateutil.parser
|
||||
from collections import OrderedDict
|
||||
from typing import Callable, Dict, List, Union
|
||||
from datetime import datetime, timedelta
|
||||
from urllib.parse import urlparse
|
||||
from lxml import etree as ET
|
||||
|
||||
try:
|
||||
import voc.tools as tools
|
||||
from voc.event import Event, EventSourceInterface
|
||||
from voc.room import Room
|
||||
from voc.logger import Logger
|
||||
except ImportError:
|
||||
import tools
|
||||
from event import Event, EventSourceInterface
|
||||
from room import Room
|
||||
from logger import Logger
|
||||
|
||||
|
||||
log = Logger(__name__)
|
||||
|
||||
# validator = f"{sys.path[0]}/validator/xsd/validate_schedule_xml.sh"
|
||||
validator = f"xmllint --noout --schema {sys.path[0]}/validator/xsd/schedule.xml.xsd"
|
||||
# validator = f"xmllint --noout --schema {sys.path[0]}/validator/xsd/schedule-without-person.xml.xsd"
|
||||
validator_filter = ""
|
||||
|
||||
|
||||
def set_validator_filter(filter):
|
||||
global validator_filter
|
||||
validator_filter += " | awk '" + " && ".join(["!/" + x + "/" for x in filter]) + "'"
|
||||
|
||||
|
||||
class ScheduleException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ScheduleDay(dict):
|
||||
start: datetime = None
|
||||
end: datetime = None
|
||||
|
||||
def __init__(
|
||||
self, i=None, year=None, month=12, day=None, tz=None, dt=None, json=None
|
||||
):
|
||||
if i is not None and dt:
|
||||
self.start = dt
|
||||
self.end = dt + timedelta(hours=23) # conference day lasts 23 hours
|
||||
|
||||
dict.__init__(self, {
|
||||
"index": i + 1,
|
||||
"date": dt.strftime("%Y-%m-%d"),
|
||||
"day_start": self.start.isoformat(),
|
||||
"day_end": self.start.isoformat(),
|
||||
"rooms": {},
|
||||
})
|
||||
return
|
||||
elif json:
|
||||
dict.__init__(self, json)
|
||||
elif i is not None and (day or (year and day)):
|
||||
dict.__init__(self, {
|
||||
"index": i + 1,
|
||||
"date": "{}-{:02d}-{:02d}".format(year, month, day),
|
||||
"day_start": datetime(year, month, day, 6, 00, tzinfo=tz).isoformat(),
|
||||
"day_end": datetime(year, month, day + 1, 6, 00, tzinfo=tz).isoformat(),
|
||||
"rooms": {},
|
||||
})
|
||||
else:
|
||||
raise Exception("Either give JSON xor i, year, month, day")
|
||||
|
||||
self.start = dateutil.parser.parse(self["day_start"])
|
||||
self.end = dateutil.parser.parse(self["day_end"])
|
||||
|
||||
def json(self):
|
||||
return self
|
||||
|
||||
|
||||
class Schedule(dict):
|
||||
"""Schedule class with import and export methods"""
|
||||
_tz = None
|
||||
_days: list[ScheduleDay] = []
|
||||
_room_ids = {}
|
||||
origin_url = None
|
||||
origin_system = None
|
||||
stats = None
|
||||
generator = None
|
||||
|
||||
def __init__(self, name: str = None, json=None, version: str = None, conference=None, start_hour=9):
|
||||
if json:
|
||||
dict.__init__(self, json["schedule"])
|
||||
elif conference:
|
||||
dict.__init__(self, {
|
||||
"version": version,
|
||||
"conference": conference
|
||||
})
|
||||
|
||||
if "days" in self["conference"]:
|
||||
self._generate_stats("start" not in self["conference"])
|
||||
|
||||
if "start" not in self["conference"]:
|
||||
self["conference"]["start"] = self.stats.first_event.start.isoformat()
|
||||
if "end" not in self["conference"]:
|
||||
self["conference"]["end"] = self.stats.last_event.end.isoformat()
|
||||
|
||||
if "rooms" not in self["conference"]:
|
||||
# looks like we have an old style schedule json,
|
||||
# so let's construct room map from the scheduling data
|
||||
room_names = {}
|
||||
for day in self["conference"].get("days", []):
|
||||
# TODO: why are don't we use a Set?
|
||||
room_names.update([(k, None) for k in day["rooms"].keys()])
|
||||
self["conference"]["rooms"] = [{"name": name} for name in room_names]
|
||||
|
||||
if "days" not in self["conference"] or len(self["conference"]["days"]) == 0:
|
||||
tz = self.tz()
|
||||
date = tz.localize(self.conference_start()).replace(hour=start_hour)
|
||||
days = []
|
||||
for i in range(self.conference("daysCount")):
|
||||
days.append(ScheduleDay(i, dt=date))
|
||||
date += timedelta(hours=24)
|
||||
self["conference"]["days"] = days
|
||||
|
||||
@classmethod
|
||||
def from_url(cls, url, timeout=10):
|
||||
log.info("Requesting " + url)
|
||||
schedule_r = requests.get(url, timeout=timeout)
|
||||
|
||||
if schedule_r.ok is False:
|
||||
schedule_r.raise_for_status()
|
||||
raise Exception(
|
||||
" Request failed, HTTP {0}.".format(schedule_r.status_code)
|
||||
)
|
||||
|
||||
data = schedule_r.json()
|
||||
|
||||
# add sourounding schedule obj for inproperly formated schedule.json's
|
||||
if "schedule" not in data and "conference" in data:
|
||||
data = {"schedule": data}
|
||||
# move days into conference obj for inproperly formated schedule.json's
|
||||
if "days" in data['schedule']:
|
||||
data['schedule']['conference']['days'] = data['schedule'].pop("days")
|
||||
print(json.dumps(data, indent=2))
|
||||
|
||||
if "version" not in data["schedule"]:
|
||||
data["schedule"]["version"] = ""
|
||||
|
||||
schedule = Schedule(json=data)
|
||||
schedule.origin_url = url
|
||||
schedule.origin_system = urlparse(url).netloc
|
||||
return schedule
|
||||
|
||||
@classmethod
|
||||
def from_file(cls, name):
|
||||
with open(name, "r") as fp:
|
||||
schedule = tools.parse_json(fp.read())
|
||||
return Schedule(json=schedule)
|
||||
|
||||
@classmethod
|
||||
def from_template(
|
||||
cls, title, acronym, year, month, day, days_count=1, tz="Europe/Amsterdam"
|
||||
):
|
||||
schedule = Schedule(
|
||||
version=datetime.now().strftime("%Y:%m-%d %H:%M"),
|
||||
conference={
|
||||
"acronym": acronym.lower(),
|
||||
"title": title,
|
||||
"start": "{}-{:02d}-{:02d}".format(year, month, day),
|
||||
"end": "{}-{:02d}-{:02d}".format(year, month, day + days_count - 1),
|
||||
"daysCount": days_count,
|
||||
"timeslot_duration": "00:15",
|
||||
"time_zone_name": tz,
|
||||
},
|
||||
)
|
||||
tzinfo = pytz.timezone(tz)
|
||||
days = schedule["conference"]["days"]
|
||||
for i in range(days_count):
|
||||
d = ScheduleDay(i, year, month, day + i, tz=tzinfo)
|
||||
days.append(d)
|
||||
|
||||
return schedule
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, template, start_hour=9):
|
||||
schedule = Schedule(json=template)
|
||||
|
||||
return schedule
|
||||
|
||||
@classmethod
|
||||
def from_XC3_template(cls, name, congress_nr, start_day, days_count):
|
||||
year = str(1983 + congress_nr)
|
||||
|
||||
schedule = Schedule(
|
||||
version=datetime.now().strftime("%Y-%m-%d %H:%M"),
|
||||
conference={
|
||||
"acronym": f"{congress_nr}C3" + ("-" + name.lower() if name else ""),
|
||||
"title": f"{congress_nr}. Chaos Communication Congress" + (" - " + name if name else ""),
|
||||
"start": "{}-12-{}".format(year, start_day),
|
||||
"end": "{}-12-{}".format(year, start_day + days_count - 1),
|
||||
"daysCount": days_count,
|
||||
"timeslot_duration": "00:15",
|
||||
"time_zone_name": "Europe/Amsterdam",
|
||||
},
|
||||
)
|
||||
|
||||
return schedule
|
||||
|
||||
@classmethod
|
||||
def empty_copy_of(cls, parent_schedule: 'Schedule', name: str, start_hour=None):
|
||||
schedule = Schedule(
|
||||
version=datetime.now().strftime("%Y:%m-%d %H:%M"),
|
||||
conference=copy.deepcopy(parent_schedule.conference()),
|
||||
)
|
||||
schedule["conference"]["title"] += " - " + name
|
||||
|
||||
for day in schedule["conference"]["days"]:
|
||||
if start_hour is not None:
|
||||
start = dateutil.parser.parse(day["day_start"]).replace(hour=start_hour)
|
||||
day["day_start"] = start.isoformat()
|
||||
day["rooms"] = []
|
||||
|
||||
return schedule
|
||||
|
||||
def reset_generator(self):
|
||||
self.generator = tools.generator_info()
|
||||
|
||||
# TODO: test if this method still works after refactoring of Schedule class to dict child
|
||||
def copy(self, name=None):
|
||||
schedule = copy.deepcopy(self)
|
||||
if name:
|
||||
schedule["conference"]["title"] += f" - {name}"
|
||||
return Schedule(json={"schedule": schedule})
|
||||
|
||||
def version(self):
|
||||
return self["version"]
|
||||
|
||||
def tz(self):
|
||||
if not self._tz:
|
||||
self._tz = pytz.timezone(self.conference("time_zone_name"))
|
||||
return self._tz
|
||||
|
||||
def conference(self, key=None, filter: Callable = None, fallback=None):
|
||||
if key:
|
||||
if filter:
|
||||
return next((item for item in self["conference"][key] if filter(item)), fallback)
|
||||
|
||||
return self["conference"].get(key, fallback)
|
||||
else:
|
||||
return self["conference"]
|
||||
|
||||
def conference_start(self):
|
||||
return dateutil.parser.parse(self.conference("start").split("T")[0])
|
||||
|
||||
def days(self):
|
||||
# TODO return _days object list instead of raw dict/json?
|
||||
return self["conference"]["days"]
|
||||
|
||||
def day(self, day: int):
|
||||
return self.days()[day - 1]
|
||||
|
||||
def room(self, name=None, guid=None):
|
||||
if guid:
|
||||
return self.conference('rooms', lambda x: x['guid'] == guid, {'name': name, 'guid': guid})
|
||||
if name:
|
||||
return self.conference('rooms', lambda x: x['name'] == name, {'name': name, 'guid': guid})
|
||||
|
||||
raise Exception('Either name or guid has to be provided')
|
||||
|
||||
def rooms(self, mode='names'):
|
||||
if mode == 'names':
|
||||
return [room['name'] for room in self.conference('rooms')]
|
||||
elif mode == 'obj':
|
||||
return [Room.from_dict(r) for r in self.conference('rooms')]
|
||||
else:
|
||||
return self.conference('rooms')
|
||||
|
||||
def add_rooms(self, rooms: list, context: EventSourceInterface = {}):
|
||||
if rooms:
|
||||
for x in rooms:
|
||||
self.add_room(x, context)
|
||||
|
||||
def rename_rooms(self, replacements: Dict[str, str|Room]):
|
||||
|
||||
name_replacements = {}
|
||||
|
||||
for old_name_or_guid, new_room in replacements.items():
|
||||
new_name = new_room if isinstance(new_room, str) else new_room.name
|
||||
|
||||
r = self.room(name=old_name_or_guid) or self.room(guid=old_name_or_guid)
|
||||
if r['name'] != new_name:
|
||||
name_replacements[r['name']] = new_name
|
||||
r['name'] = new_name
|
||||
if isinstance(new_room, Room) and new_room.guid:
|
||||
r['guid'] = new_room.guid
|
||||
self._room_ids[new_name] = new_room.guid
|
||||
elif r.get('guid'):
|
||||
self._room_ids[new_name] = r['guid']
|
||||
|
||||
for day in self['conference']['days']:
|
||||
for room_key, events in list(day['rooms'].items()):
|
||||
new_room = replacements.get(room_key, room_key)
|
||||
new_name = new_room if isinstance(new_room, str) else new_room.name
|
||||
|
||||
day['rooms'][new_name] = day['rooms'].pop(room_key)
|
||||
if room_key != new_name:
|
||||
for event in events:
|
||||
event['room'] = new_name
|
||||
|
||||
def add_room(self, room: Union[str, dict], context: EventSourceInterface = {}):
|
||||
# if rooms is str, use the old behaviour – for backwords compability
|
||||
if type(room) is str:
|
||||
for day in self.days():
|
||||
if room not in day["rooms"]:
|
||||
day["rooms"][room] = list()
|
||||
# otherwise add new room dict to confernce
|
||||
elif "name" in room:
|
||||
if room["name"] in self._room_ids and self._room_ids[room["name"]] == room.get('guid'):
|
||||
# we know this room already, so return early
|
||||
return
|
||||
|
||||
if 'location' in context:
|
||||
room['location'] = context['location']
|
||||
|
||||
self.conference("rooms").append(room)
|
||||
self._room_ids[room["name"]] = room.get("guid")
|
||||
self.add_room(room["name"])
|
||||
|
||||
def room_exists(self, day: int, name: str):
|
||||
return name in self.day(day)["rooms"]
|
||||
|
||||
def add_room_on_day(self, day: int, name: str):
|
||||
self.day(day)["rooms"][name] = list()
|
||||
|
||||
def add_room_with_events(self, day: int, target_room, data, origin=None):
|
||||
if not data or len(data) == 0:
|
||||
return
|
||||
|
||||
# log.debug(' adding room {} to day {} with {} events'.format(target_room, day, len(data)))
|
||||
target_day_rooms = self.day(day)["rooms"]
|
||||
|
||||
if self.room_exists(day, target_room):
|
||||
target_day_rooms[target_room] += data
|
||||
else:
|
||||
target_day_rooms[target_room] = data
|
||||
|
||||
|
||||
# TODO this method should work woth both room key and room guid,
|
||||
# but currently it only works with room name
|
||||
def remove_room(self, room_key: str):
|
||||
# if room key is a name, remove it directly from the room list
|
||||
if room_key in self._room_ids:
|
||||
del self._room_ids[room_key]
|
||||
|
||||
obj = self.room(name=room_key)
|
||||
self["conference"]["rooms"].remove(obj)
|
||||
|
||||
if room_key in self._room_ids:
|
||||
del self._room_ids[room_key]
|
||||
|
||||
for day in self["conference"]["days"]:
|
||||
if room_key in day["rooms"]:
|
||||
del day["rooms"][room_key]
|
||||
|
||||
def event(self, guid: str) -> Event:
|
||||
for day in self["conference"]["days"]:
|
||||
for room in day["rooms"]:
|
||||
for event in day["rooms"][room]:
|
||||
if event['guid'] == guid:
|
||||
if isinstance(event, Event):
|
||||
return event
|
||||
else:
|
||||
return Event(event)
|
||||
|
||||
def add_event(self, event: Event, options=None):
|
||||
day = self.get_day_from_time(event.start)
|
||||
if event.get("slug") is None:
|
||||
event["slug"] = "{acronym}-{id}-{name}".format(
|
||||
acronym=self.conference()["acronym"],
|
||||
id=event["id"],
|
||||
name=tools.normalise_string(event["title"]),
|
||||
)
|
||||
|
||||
if not self.room_exists(day, event["room"]):
|
||||
self.add_room_on_day(day, event["room"])
|
||||
|
||||
self.days()[day - 1]["rooms"][event["room"]].append(event)
|
||||
|
||||
def foreach_event(self, func, *args):
|
||||
out = []
|
||||
for day in self["conference"]["days"]:
|
||||
for room in day["rooms"]:
|
||||
for event in day["rooms"][room]:
|
||||
result = func(event if isinstance(event, Event) else Event(event), *args)
|
||||
if result:
|
||||
out.append(result)
|
||||
return out
|
||||
|
||||
def foreach_event_raw(self, func, *args):
|
||||
out = []
|
||||
for day in self["conference"]["days"]:
|
||||
for room in day["rooms"]:
|
||||
for event in day["rooms"][room]:
|
||||
result = func(event, *args)
|
||||
if result:
|
||||
out.append(result)
|
||||
|
||||
return out
|
||||
|
||||
def foreach_day_room(self, func):
|
||||
out = []
|
||||
for day in self["conference"]["days"]:
|
||||
for room in day["rooms"]:
|
||||
result = func(day["rooms"][room])
|
||||
if result:
|
||||
out.append(result)
|
||||
|
||||
return out
|
||||
|
||||
    def _generate_stats(self, enable_time_stats=False, verbose=False):
        class ScheduleStats:
            min_id = None
            max_id = None
            person_min_id = None
            person_max_id = None
            events_count = 0
            first_event: Event = None
            last_event: Event = None

        self.stats = ScheduleStats()

        def calc_stats(event: Event):
            self.stats.events_count += 1

            id = int(event["id"])
            if self.stats.min_id is None or id < self.stats.min_id:
                self.stats.min_id = id
            if self.stats.max_id is None or id > self.stats.max_id:
                self.stats.max_id = id

            if self.stats.first_event is None or event.start < self.stats.first_event.start:
                self.stats.first_event = event
            # note: the last event is the one with the *latest* start
            if self.stats.last_event is None or event.start > self.stats.last_event.start:
                self.stats.last_event = event

            for person in event.get("persons", []):
                if "id" in person and (isinstance(person["id"], int) or person["id"].isnumeric()):
                    if (
                        self.stats.person_min_id is None
                        or int(person["id"]) < self.stats.person_min_id
                    ):
                        self.stats.person_min_id = int(person["id"])
                    if (
                        self.stats.person_max_id is None
                        or int(person["id"]) > self.stats.person_max_id
                    ):
                        self.stats.person_max_id = int(person["id"])

        self.foreach_event(calc_stats)

        if verbose:
            print(f"  from {self['conference']['start']} to {self['conference']['end']}")
            print("  contains {events_count} events, with local ids from {min_id} to {max_id}".format(**self.stats.__dict__))  # noqa
            print("  local person ids from {person_min_id} to {person_max_id}".format(**self.stats.__dict__))  # noqa
            print(f"  rooms: {', '.join(self.rooms())}")

    def get_day_from_time(self, start_time):
        for i in range(self.conference("daysCount")):
            day = self.day(i + 1)
            if day.start <= start_time < day.end:
                # print(f"Day {day['index']}: day.start {day.start} <= start_time {start_time} < day.end {day.end}")
                return day["index"]

        raise Warning(" illegal start time: " + start_time.isoformat())
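`get_day_from_time` maps a timestamp to a 1-based day index by testing half-open intervals `[day.start, day.end)`. A standalone sketch of the same lookup, using plain datetimes instead of `ScheduleDay` objects (the dates here are made up):

```python
from datetime import datetime

# Half-open interval lookup, as in Schedule.get_day_from_time():
# a timestamp belongs to day i iff start <= t < end.
days = [
    {"index": 1, "start": datetime(2023, 12, 27, 4, 0), "end": datetime(2023, 12, 28, 4, 0)},
    {"index": 2, "start": datetime(2023, 12, 28, 4, 0), "end": datetime(2023, 12, 29, 4, 0)},
]


def day_index(t):
    for day in days:
        if day["start"] <= t < day["end"]:
            return day["index"]
    raise Warning("illegal start time: " + t.isoformat())


idx = day_index(datetime(2023, 12, 28, 11, 30))
```

Half-open intervals mean a talk starting exactly at a day boundary belongs unambiguously to the later day, with no timestamp matching two days.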
    def add_events_from(self, other_schedule, id_offset=None, options={}, context: EventSourceInterface = {}):
        offset = (
            other_schedule.conference_start() - self.conference_start()
        ).days
        days = other_schedule.days()
        # workaround if the other schedule starts with index 0 instead of 1
        if days[0]["index"] == 0:
            offset += 1

        self["version"] += " " + other_schedule.version()

        if offset:
            log.warning("  calculated conference start day index offset: {}".format(offset))

        for day in days:
            target_day = day["index"] + offset

            if target_day < 1:
                log.warning(f"  ignoring day {day['date']} from {other_schedule.conference('acronym')}, as primary schedule starts at {self.conference('start')}")
                continue

            if day["date"] != self.day(target_day)["date"]:
                log.error(f"  ERROR: the other schedule's days have to match the primary schedule to some extent: {day['date']} != {self.day(target_day)['date']}!")
                return False

            self.add_rooms(other_schedule.conference("rooms"), context)

            for room in day["rooms"]:
                if options and "room-map" in options and room in options["room-map"]:
                    target_room = options["room-map"][room]

                    for event in day["rooms"][room]:
                        event["room"] = target_room
                elif options and "room-prefix" in options:
                    target_room = options["room-prefix"] + room
                else:
                    target_room = room

                events = []
                for event in day["rooms"][room]:
                    if options.get("track"):
                        event["track"] = options['track'](event) if callable(options["track"]) else options["track"]

                    if options.get("do_not_record"):
                        event["do_not_record"] = options['do_not_record'](event) if callable(options["do_not_record"]) else options["do_not_record"]

                    if options.get("remove_title_additions"):
                        # split "Title: Subtitle (Type)" into its parts
                        match = re.match(r"^(.{5,}?)(?:(?::| [–-]+) (.+?))?(?: \((.+?)\))?$", event["title"])
                        if match:
                            event["title"], subtitle, event_type = match.groups()

                            if not event.get("subtitle") and subtitle:
                                event["subtitle"] = subtitle

                    if options.get("rewrite_id_from_question"):
                        q = next(
                            (
                                x
                                for x in event["answers"]
                                if x.question == options["rewrite_id_from_question"]
                            ),
                            None,
                        )
                        if q is not None:
                            event["id"] = q["answer"]
                    elif id_offset:
                        event["id"] = int(event["id"]) + id_offset
                        # TODO? offset for person IDs?

                    # workaround for fresh pretalx instances
                    elif options.get("randomize_small_ids") and int(event["id"]) < 1500:
                        event["id"] = int(re.sub("[^0-9]+", "", event["guid"])[0:4])

                    # overwrite slug for pretalx schedule.json input
                    if options.get("overwrite_slug", False):
                        event["slug"] = "{slug}-{id}-{name}".format(
                            slug=self.conference("acronym").lower(),
                            id=event["id"],
                            name=tools.normalise_string(event["title"].split(":")[0]),
                        )

                    if options.get("prefix_person_ids"):
                        prefix = options.get("prefix_person_ids")
                        for person in event["persons"]:
                            person["id"] = f"{prefix}-{person['id']}"

                    events.append(event if isinstance(event, Event) else Event(event, origin=other_schedule))

                # copy the whole day_room to the target schedule
                self.add_room_with_events(target_day, target_room, events)
        return True

    def find_event(self, id=None, guid=None):
        if not id and not guid:
            raise RuntimeError("Please provide either id or guid")

        if id:
            result = self.foreach_event(
                lambda event: event if event["id"] == id else None
            )
        else:
            result = self.foreach_event(
                lambda event: event if event["guid"] == guid else None
            )

        if len(result) > 1:
            log.warning("Found multiple events with id %s", id or guid)
            return result

        if len(result) == 0:
            raise Warning("could not find event with id " + str(id or guid))

        return result[0]

    def remove_event(self, id=None, guid=None):
        if not id and not guid:
            raise RuntimeError("Please provide either id or guid")

        for day in self["conference"]["days"]:
            for room in day["rooms"]:
                # iterate over a copy, so removal does not skip elements
                for event in list(day["rooms"][room]):
                    if (
                        event["id"] == id
                        or event["id"] == str(id)
                        or event["guid"] == guid
                    ):
                        log.info("removing %s", event["title"])
                        day["rooms"][room].remove(event)

    # dict_to_etree from http://stackoverflow.com/a/10076823
    # TODO:
    #  * check links conversion
    #  * ' vs " in xml
    #  * logo is in json but not in xml
    # formerly named dict_to_schedule_xml()
    def xml(self, method="string"):
        root_node = None

        def dict_to_attrib(d, root):
            assert isinstance(d, dict)
            for k, v in d.items():
                assert _set_attrib(root, k, v)

        def _set_attrib(tag, k, v):
            if isinstance(v, str):
                tag.set(k, v)
            elif isinstance(v, int):
                tag.set(k, str(v))
            else:
                log.error("  error: unknown attribute type %s=%s" % (k, v))

        def _to_etree(d, node, parent=""):
            if not d:
                pass
            elif isinstance(d, str):
                node.text = d
            elif isinstance(d, int):
                node.text = str(d)
            elif parent == "person":
                node.text = d.get("public_name") or d.get('full_public_name') or d.get('full_name') or d.get('name')
                if "id" in d:
                    _set_attrib(node, "id", d["id"])
                if "guid" in d:
                    _set_attrib(node, "guid", d["guid"])

            elif isinstance(d, (dict, OrderedDict, Event, ScheduleDay)):
                # location of base_url sadly differs in frab's json and xml serialisation :-(
                if parent == "schedule" and "base_url" in d:
                    d["conference"]["base_url"] = d["base_url"]
                    del d["base_url"]

                # the count variable tracks how many items actually end up as elements
                # (as opposed to being mapped to an attribute)
                count = len(d)
                recording_license = ""
                for k, v in d.items():
                    if parent == "day":
                        if k[:4] == "day_":
                            # remove the day_ prefix from items
                            k = k[4:]

                    if (
                        k == "id"
                        or k == "guid"
                        or (parent == "day" and isinstance(v, (str, int)))
                        or parent == "generator"
                        or parent == "track"
                        or parent == "color"
                    ):
                        _set_attrib(node, k, v)
                        count -= 1
                    elif k == "url" and parent in ["link", "attachment"]:
                        _set_attrib(node, "href", v)
                        count -= 1
                    elif k == "title" and parent in ["link", "attachment"]:
                        node.text = v
                    elif count == 1 and isinstance(v, str):
                        node.text = v
                    else:
                        node_ = node

                        if parent == "room":
                            # create a room tag for each instance of a room name
                            node_ = ET.SubElement(node, "room")
                            node_.set("name", k or '')
                            if k in self._room_ids and self._room_ids[k]:
                                node_.set("guid", self._room_ids[k])

                            k = "event"

                        if k == "days":
                            # in the xml schedule, days are not a child of the conference,
                            # but sit directly in the document node
                            node_ = root_node

                        # ignore the room list on the conference
                        if k == 'rooms' and parent == 'conference':
                            continue
                        # special handling for collections: days, rooms etc.
                        elif k[-1:] == "s":
                            # don't ask me why the pentabarf schedule xml schema is so inconsistent --Andi
                            # create a collection tag for specific tags, e.g. persons, links etc.
                            if parent == "event":
                                node_ = ET.SubElement(node, k)

                            # remove the last char (which is an s)
                            k = k[:-1]
                        # different notation for conference length in days
                        elif parent == "conference" and k == "daysCount":
                            k = "days"
                        # special handling for recording_license and the do_not_record flag
                        elif k == "recording_license":
                            # store the value for the next loop iteration
                            recording_license = v
                            # skip forward to the next loop iteration
                            continue
                        elif k == "do_not_stream":
                            # we don't expose this flag in schedule.xml, only in schedule.json
                            continue
                        elif k == "do_not_record" or k == "recording":
                            k = "recording"
                            # not in schedule.json: license information for an event
                            v = {
                                "license": recording_license,
                                "optout": v,
                            }
                        # new style schedule.json (version 2022-12)
                        elif k == "optout":
                            v = "true" if v is True else "false"

                        # iterate over lists
                        if isinstance(v, list):
                            for element in v:
                                _to_etree(element, ET.SubElement(node_, k), k)
                        # don't create a single empty room tag, as we have to create one per room, see above
                        elif parent == "day" and k == "room":
                            _to_etree(v, node_, k)
                        else:
                            _to_etree(v, ET.SubElement(node_, k), k)
            else:
                assert d == "invalid type"

        assert isinstance(self, dict)

        root_node = ET.Element("schedule")
        root_node.set("{http://www.w3.org/2001/XMLSchema-instance}noNamespaceSchemaLocation", "https://c3voc.de/schedule/schema.xsd")
        _to_etree(self, root_node, "schedule")

        if method == 'xml':
            return root_node
        elif method == 'bytes':
            return ET.tostring(root_node, pretty_print=True, xml_declaration=True)

        return ET.tostring(root_node, pretty_print=True, encoding="unicode", doctype='<?xml version="1.0"?>')
    def json(self, method="json", **args):
        # note: a local variable named `json` would shadow the json module below
        data = {
            "$schema": "https://c3voc.de/schedule/schema.json",
            "schedule": {
                "generator": self.generator or tools.generator_info(),
                **self
            }
        }
        if method == 'string':
            return json.dumps(self, indent=2, cls=ScheduleEncoder, **args)

        return data

    def filter(self, name: str, rooms: Union[List[Union[str, Room]], Callable]):
        log.info(f'\nExporting {name}... ')
        schedule = self.copy(name)

        if callable(rooms):
            def filterRoom(room):
                return rooms(room)
        else:
            room_names = set()
            room_guids = set()
            for room in rooms:
                if isinstance(room, Room):
                    if room.guid:
                        room_guids.add(room.guid)
                    if room.name:
                        room_names.add(room.name)
                else:
                    # plain strings are matched by room name
                    room_names.add(room)

            def filterRoom(room: Room):
                if isinstance(room, Room):
                    return room.name in room_names or \
                        room.guid in room_guids
                else:
                    return room['name'] in room_names or \
                        room.get('guid', '') in room_guids

        for room in schedule.rooms(mode='obj'):
            if not filterRoom(room):
                log.info(f"deleting room {room.name} on conference")
                schedule.remove_room(room.name)

        schedule['version'] = self.version().split(';')[0]
        return schedule

    def export(self, prefix_or_target):
        """Export the schedule to json and xml files, and validate the xml."""

        target_json = None
        target_xml = None

        if prefix_or_target.endswith(".json"):
            target_json = prefix_or_target
        elif prefix_or_target.endswith(".xml"):
            target_xml = prefix_or_target
        else:
            target_json = f"{prefix_or_target}.schedule.json"
            target_xml = f"{prefix_or_target}.schedule.xml"

        if target_json:
            with open(target_json, "w") as fp:
                json.dump(self.json(), fp, indent=2, cls=ScheduleEncoder)

            # TODO: we should also validate the json file here

        if target_xml:
            with open(target_xml, "w") as fp:
                fp.write(self.xml())

            # TODO: use a python XML validator instead of a shell call
            # validate xml
            result = os.system(
                f'/bin/bash -c "{validator} {target_xml} 2>&1 {validator_filter}; exit \\${{PIPESTATUS[0]}}"'
            )
            if result != 0 and validator_filter:
                log.warning("  (validation errors might be hidden by validator_filter)")

    def __str__(self):
        return json.dumps(self, indent=2, cls=ScheduleEncoder)


class ScheduleEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Schedule):
            return obj.json()
        if isinstance(obj, ScheduleDay):
            return obj
        if isinstance(obj, Event):
            return obj.json()
        return json.JSONEncoder.default(self, obj)
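`ScheduleEncoder` follows the standard `json.JSONEncoder` extension pattern: override `default()` to translate objects the encoder does not know natively. A self-contained illustration of the same pattern (the `Talk` class here is hypothetical, not part of this codebase):

```python
import json


class Talk:
    """Hypothetical stand-in for the Event class."""
    def __init__(self, id, title):
        self.id = id
        self.title = title


class TalkEncoder(json.JSONEncoder):
    # default() is only consulted for objects json cannot serialise natively
    def default(self, obj):
        if isinstance(obj, Talk):
            return {"id": obj.id, "title": obj.title}
        return json.JSONEncoder.default(self, obj)


dumped = json.dumps({"talks": [Talk(1, "Opening")]}, cls=TalkEncoder)
```

Delegating unknown types back to `json.JSONEncoder.default()` preserves the standard `TypeError` for genuinely unserialisable objects instead of silently dropping them.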
383 voc/tools.py Normal file
@@ -0,0 +1,383 @@
# -*- coding: UTF-8 -*-
from datetime import timedelta
from logging import Logger
import argparse
from os import path
import os
import uuid
import json
import re
import sys

from typing import Dict, Union
from collections import OrderedDict
from bs4 import Tag
from git import Repo

import __main__

sos_ids = {}
last_edited = {}
next_id = 1000
generated_ids = 0
NAMESPACE_VOC = uuid.UUID('0C9A24B4-72AA-4202-9F91-5A2B6BFF2E6F')
VERSION = None

log = Logger(__name__)


def DefaultOptionParser(local=False):
    parser = argparse.ArgumentParser()
    parser.add_argument("--online", action="store_true", dest="online", default=False)
    parser.add_argument(
        "--fail", action="store_true", dest="exit_when_exception_occours", default=local
    )
    parser.add_argument("--stats", action="store_true", dest="only_stats", default=False)
    parser.add_argument("--git", action="store_true", dest="git", default=False)
    parser.add_argument("--debug", action="store_true", dest="debug", default=local)
    return parser


def write(x):
    sys.stdout.write(x)
    sys.stdout.flush()


def set_base_id(value):
    global next_id
    next_id = value


def get_id(guid, length=None):
    # use the newer mode without external state if length is set
    if length:
        # generate a numeric id from the first `length` digits of the guid, stripping leading zeros
        return int(re.sub("^0+", "", re.sub("[^0-9]+", "", guid))[0:length])

    global sos_ids, next_id, generated_ids
    if guid not in sos_ids:
        # generate a new id
        sos_ids[guid] = next_id
        next_id += 1
        generated_ids += 1

    return sos_ids[guid]
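The stateless branch of `get_id` derives a stable numeric id from the digits embedded in a guid: keep only digits, drop leading zeros, truncate. The same expression in isolation (the sample guid is made up):

```python
import re


def guid_to_id(guid, length=4):
    # keep only digits, drop leading zeros, then truncate to `length` digits
    digits = re.sub("^0+", "", re.sub("[^0-9]+", "", guid))
    return int(digits[0:length])


event_id = guid_to_id("007a-12bc-34")
```

Because the id is a pure function of the guid, re-running an import yields the same ids without consulting the `_sos_ids.json` state file.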
def load_sos_ids():
    global sos_ids, next_id, generated_ids
    if path.isfile("_sos_ids.json"):
        with open("_sos_ids.json", "r") as fp:
            # maintain the order from the file
            temp = fp.read()
            sos_ids = json.JSONDecoder(object_pairs_hook=OrderedDict).decode(temp)

        next_id = max(sos_ids.values()) + 1


# write sos_ids to disk
def store_sos_ids():
    global sos_ids
    with open("_sos_ids.json", "w") as fp:
        json.dump(sos_ids, fp, indent=4)


def gen_random_uuid():
    return uuid.uuid4()


def gen_person_uuid(email):
    return str(uuid.uuid5(uuid.NAMESPACE_URL, 'acct:' + email))


def gen_uuid(value):
    return str(uuid.uuid5(NAMESPACE_VOC, str(value)))
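`gen_person_uuid` and `gen_uuid` rely on `uuid5` being deterministic: the same namespace and name always produce the same UUID, so ids stay stable across runs and across machines. For example (the addresses are made up):

```python
import uuid


# uuid5 is a deterministic hash of (namespace, name): re-running the
# generator for the same input reproduces the same id.
def gen_person_uuid(email):
    return str(uuid.uuid5(uuid.NAMESPACE_URL, 'acct:' + email))


a = gen_person_uuid("alice@example.org")
b = gen_person_uuid("alice@example.org")
c = gen_person_uuid("bob@example.org")
```

The `acct:` prefix keeps person uuids in a distinct namespace from other URL-derived uuids, and the custom `NAMESPACE_VOC` in `gen_uuid` does the same for non-person values.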
# deprecated, use Schedule.foreach_event() instead
# TODO: remove
def foreach_event(schedule, func):
    out = []
    for day in schedule["schedule"]["conference"]["days"]:
        for room in day['rooms']:
            for event in day['rooms'][room]:
                out.append(func(event))

    return out


def copy_base_structure(subtree, level):
    ret = OrderedDict()
    if level > 0:
        for key, value in subtree.items():
            if isinstance(value, (str, int)):
                ret[key] = value
            elif isinstance(value, list):
                ret[key] = copy_base_structure_list(value, level - 1)
            else:
                ret[key] = copy_base_structure(value, level - 1)
    return ret


def copy_base_structure_list(subtree, level):
    ret = []
    if level > 0:
        for value in subtree:
            if isinstance(value, (str, int)):
                ret.append(value)
            elif isinstance(value, list):
                ret.append(copy_base_structure_list(value, level - 1))
            else:
                ret.append(copy_base_structure(value, level - 1))
    return ret


def normalise_string(string):
    string = string.lower()
    string = string.replace(u'ä', 'ae')
    string = string.replace(u'ö', 'oe')
    string = string.replace(u'ü', 'ue')
    string = string.replace(u'ß', 'ss')
    string = re.sub(r'\W+', '_', string.strip())  # replace runs of non-word characters with _
    string = re.sub(r'[^a-z0-9_]+', '', string)  # TODO: is this not already covered by the \W+ line above?
    string = string.strip('_')  # remove leading/trailing _

    return string
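`normalise_string` turns a talk title into a slug fragment: lowercase, transliterate German umlauts, then collapse everything non-alphanumeric to underscores. Running the same steps on a sample title:

```python
import re


def normalise_string(string):
    # lowercase, transliterate umlauts, collapse non-word runs to "_"
    string = string.lower()
    for src, dst in (("ä", "ae"), ("ö", "oe"), ("ü", "ue"), ("ß", "ss")):
        string = string.replace(src, dst)
    string = re.sub(r'\W+', '_', string.strip())
    string = re.sub(r'[^a-z0-9_]+', '', string)
    return string.strip('_')


slug = normalise_string("Überwachung & Kontrolle!")
```

Transliterating before the `\W+` pass matters: otherwise the umlauts would be treated as non-word characters and collapsed away instead of becoming `ae`/`oe`/`ue`.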
def normalise_time(timestr):
    timestr = timestr.replace('p.m.', 'pm')
    timestr = timestr.replace('a.m.', 'am')
    # workaround for a failure in the input file format
    if timestr.startswith('0:00'):
        timestr = timestr.replace('0:00', '12:00')

    return timestr


def format_duration(value: Union[int, timedelta]) -> str:
    if isinstance(value, timedelta):
        minutes = round(value.total_seconds() / 60)
    else:
        minutes = value

    return '%d:%02d' % divmod(minutes, 60)
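`format_duration` renders a minute count (or a `timedelta`) as `H:MM` via `divmod`:

```python
from datetime import timedelta


def format_duration(value):
    # accept either raw minutes or a timedelta
    minutes = round(value.total_seconds() / 60) if isinstance(value, timedelta) else value
    return '%d:%02d' % divmod(minutes, 60)


short = format_duration(90)
long = format_duration(timedelta(hours=2, minutes=5))
```

`divmod(minutes, 60)` yields hours and remaining minutes in one step, and `%02d` zero-pads the minutes so `2:05` does not come out as `2:5`.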
# from https://git.cccv.de/hub/hub/-/blob/develop/src/core/utils.py
_RE_STR2TIMEDELTA = re.compile(r'((?P<hours>\d+?)hr?\s*)?((?P<minutes>\d+?)m(ins?)?\s*)?((?P<seconds>\d+?)s)?')


def str2timedelta(s):
    if ':' in s:
        parts = s.split(':')
        kwargs = {'seconds': int(parts.pop())}
        if parts:
            kwargs['minutes'] = int(parts.pop())
        if parts:
            kwargs['hours'] = int(parts.pop())
        if parts:
            kwargs['days'] = int(parts.pop())
        return timedelta(**kwargs)

    parts = _RE_STR2TIMEDELTA.match(s)
    if not parts:
        return
    parts = parts.groupdict()
    time_params = {}
    for name, param in parts.items():
        if param:
            time_params[name] = int(param)
    return timedelta(**time_params)
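`str2timedelta` accepts both colon notation (fields counted from the right: seconds, minutes, hours, days) and unit suffixes like `1h30m`. A condensed version exercising both paths:

```python
import re
from datetime import timedelta

_RE = re.compile(r'((?P<hours>\d+?)hr?\s*)?((?P<minutes>\d+?)m(ins?)?\s*)?((?P<seconds>\d+?)s)?')


def str2timedelta(s):
    if ':' in s:
        # colon form: fields count from the right as seconds, minutes, hours, days
        parts = s.split(':')
        kwargs = {'seconds': int(parts.pop())}
        for name in ('minutes', 'hours', 'days'):
            if parts:
                kwargs[name] = int(parts.pop())
        return timedelta(**kwargs)
    m = _RE.match(s)
    return timedelta(**{k: int(v) for k, v in m.groupdict().items() if v}) if m else None


a = str2timedelta('1:30:00')
b = str2timedelta('45m')
```

Popping fields from the right makes `1:30` mean one minute thirty seconds rather than one hour thirty minutes, which matches how durations are usually written in schedules of short talks.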
def parse_json(text):
    # this more complex way is necessary to maintain
    # the same order as in the input file under python2
    return json.JSONDecoder(object_pairs_hook=OrderedDict).decode(text)


def load_json(filename):
    with open(filename, "r") as fp:
        # maintain the order from the file
        data = parse_json(fp.read())
    return data


def get_version():
    global VERSION
    try:
        if VERSION is None:
            repo = Repo(path=__file__, search_parent_directories=True)
            sha = repo.head.object.hexsha
            VERSION = repo.git.rev_parse(sha, short=5)
    except ValueError:
        pass
    return VERSION


def generator_info():
    module = path.splitext(path.basename(__main__.__file__))[0] \
        .replace('schedule_', '')
    return ({
        "name": "voc/schedule/" + module,
        "version": get_version()
    })


def parse_html_formatted_links(td: Tag) -> Dict[str, str]:
    """
    Returns a dictionary containing all HTML formatted links found
    in the given table cell.

    - Key: The URL of the link.
    - Value: The title of the link. Might be the same as the URL.

    :param td: A table cell HTML tag.
    """
    links = {}
    for link in td.find_all("a"):
        href = link.attrs["href"]
        title = link.attrs["title"].strip()
        text = link.get_text().strip()
        # get_text() returns an empty string (not None) when there is no text,
        # so fall back to the title attribute in that case
        links[href] = text or title

    return links


def ensure_folders_exist(output_dir, secondary_output_dir):
    global local
    local = False
    if not os.path.exists(output_dir):
        try:
            if not os.path.exists(secondary_output_dir):
                os.mkdir(output_dir)
            else:
                output_dir = secondary_output_dir
                local = True
        except Exception:
            print('Please create a directory named {} if you want to run in local mode'.format(secondary_output_dir))
            exit(-1)
    os.chdir(output_dir)

    if not os.path.exists('events'):
        os.mkdir('events')

    return local


def export_filtered_schedule(output_name, parent_schedule, filter):
    write('\nExporting {} schedule... '.format(output_name))
    schedule = parent_schedule.copy(output_name)
    for day in schedule.days():
        room_keys = list(day['rooms'].keys())
        for room_key in room_keys:
            if not filter(room_key):
                del day['rooms'][room_key]

    print('\n  {}: '.format(output_name))
    for room in schedule.rooms():
        print('   - {}'.format(room))

    schedule.export(output_name)
    return schedule


def git(args):
    os.system(f'/usr/bin/env git {args}')


def commit_changes_if_something_relevant_changed(schedule):
    content_did_not_change = os.system("/usr/bin/env git diff -U0 --no-prefix | grep -e '^[+-] ' | grep -v version > /dev/null")

    if content_did_not_change:
        print('nothing relevant changed, reverting to previous state')
        git('reset --hard')
        exit(0)

    git('add *.json *.xml events/*.json')
    git('commit -m "version {}"'.format(schedule.version()))
    git('push')


# remove talks starting before 9 am
def remove_too_early_events(room):
    from .schedule import Event

    # iterate over a copy, so removal does not skip elements
    for e in list(room):
        event = e if isinstance(e, Event) else Event(e)
        start_time = event.start
        if start_time.hour > 4 and start_time.hour < 9:
            print('removing {} from the full schedule, as it takes place at {} which is too early in the morning'.format(event['title'], start_time.strftime('%H:%M')))
            room.remove(event)
        else:
            break


# harmonise event types
def harmonize_event_type(event, options):
    type_mapping = {

        # TALKS
        'talk': 'Talk',
        'talk/panel': 'Talk',
        'vortrag': 'Talk',
        'lecture': 'Talk',
        'beitrag': 'Talk',
        'track': 'Talk',
        'live on stage': 'Talk',
        'recorded': 'Talk',
        '60 min Talk + 15 min Q&A': 'Talk',
        '30 min Short Talk + 10 min Q&A': 'Talk',

        # LIGHTNING TALK
        'lightningtalk': 'Lightning Talk',
        'lightning_talk': 'Lightning Talk',
        'lightning-talk': 'Lightning Talk',
        'Lightning': 'Lightning Talk',

        # MEETUP
        'meetup': 'Meetup',

        # OTHER
        'other': 'Other',
        '': 'Other',
        'Pausenfüllmaterial': 'Other',

        # PODIUM
        'podium': 'Podium',

        # PERFORMANCE
        'theater': 'Performance',
        'performance': 'Performance',

        # CONCERT
        'konzert': 'Concert',
        'concert': 'Concert',

        # DJ SET
        'dj set': 'DJ Set',
        'DJ Set': 'DJ Set',

        # WORKSHOP
        'workshop': 'Workshop',

        # LIVE PODCAST
        'Live-Podcast': 'Live-Podcast',
    }

    type = event.get('type', '').split(' ')
    if not type or not type[0]:
        event['type'] = 'Other'
    elif event.get('type') in type_mapping:
        event['type'] = type_mapping[event['type']]
    elif event.get('type').lower() in type_mapping:
        event['type'] = type_mapping[event['type'].lower()]
    elif type[0] in type_mapping:
        event['type'] = type_mapping[type[0]]
    elif type[0].lower() in type_mapping:
        event['type'] = type_mapping[type[0].lower()]
    elif options.debug:
        log.debug(f"Unknown event type: {event['type']}")

    if event.get('language') is not None:
        event['language'] = event['language'].lower()
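`harmonize_event_type` tries increasingly lax lookups: exact match, lowercased, then the first word of the type string, then that first word lowercased. A condensed sketch of the same fallback chain (using a trimmed mapping):

```python
# Condensed fallback chain from harmonize_event_type(): exact match,
# lowercase, first word, lowercased first word, else leave as-is.
TYPE_MAPPING = {
    'talk': 'Talk',
    'vortrag': 'Talk',
    'lightning-talk': 'Lightning Talk',
    'workshop': 'Workshop',
}


def harmonize(raw):
    first = raw.split(' ')[0]
    for candidate in (raw, raw.lower(), first, first.lower()):
        if candidate in TYPE_MAPPING:
            return TYPE_MAPPING[candidate]
    return raw  # unknown types pass through unchanged


a = harmonize('Vortrag')
b = harmonize('lightning-talk 45min')
c = harmonize('Keynote')
```

The first-word fallback is what lets free-form inputs like "lightning-talk 45min" still land on the canonical type.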
136 voc/voctoimport.py Normal file
@@ -0,0 +1,136 @@
from os import getenv
from sys import stdout
import json
import time
import argparse

from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
# from gql.transport.exceptions import TransportQueryError

try:
    from .schedule import Schedule, Event
except ImportError:
    from schedule import Schedule, Event

transport = AIOHTTPTransport(
    url=getenv('IMPORT_URL', 'https://import.c3voc.de/graphql'),
    headers={'Authorization': getenv('IMPORT_TOKEN', 'Basic|Bearer|Token XXXX')}
)
client = Client(transport=transport, fetch_schema_from_transport=True)
args = None


def get_conference(acronym):
    return client.execute(gql('''
      query getConference($acronym: String!) {
        conference: conferenceBySlug(slug: $acronym) {
          id
          title
        }
      }'''), variable_values={'acronym': acronym})['conference']


def add_event(conference_id, event):
    data = {
        "event": {
            'talkid': event['id'],
            'persons': ', '.join([p for p in event.persons()]),
            **(event.voctoimport()),
            'abstract': event.get('abstract') or '',
            'published': False,
            'conferenceId': conference_id
        }
    }

    query = gql('''
      mutation upsertEvent($input: UpsertEventInput!) {
        upsertEvent(input: $input) {
          clientMutationId
        }
      }
    ''')

    try:
        client.execute(query, {'input': data})
        stdout.write('.')
        stdout.flush()
    except Exception as e:
        print(json.dumps(data, indent=2))
        print()
        print(e)
        print()
        time.sleep(10)


def remove_event(event_guid):
    try:
        client.execute(gql('''
          mutation deleteEvent($guid: UUID!) {
            deleteEvent(input: {guid: $guid}) { deletedEventNodeId }
          }
        '''), {'input': {'guid': event_guid}})
    except Exception as e:
        print(e)
        print()


class VoctoImport:
    schedule = None
    conference = None

    def __init__(self, schedule: Schedule, create=False):
        global args

        self.schedule = schedule
        acronym = args.conference or args.acronym or schedule.conference('acronym')
        self.conference = get_conference(acronym)
        if not self.conference:
            raise Exception(f'Unknown conference {acronym}')

    def upsert_event(self, event):
        add_event(self.conference['id'], Event(event))

    def depublish_event(self, event_guid):
        remove_event(event_guid)


def push_schedule(schedule: Schedule, create=False):
    instance = VoctoImport(schedule, create)
    schedule.foreach_event(instance.upsert_event)


def run(args):
    if args.url or args.acronym:
        schedule = Schedule.from_url(
            args.url or f'https://pretalx.c3voc.de/{args.acronym}/schedule/export/schedule.json'
        )
    else:
        schedule = Schedule.from_file('jev22/channels.schedule.json')

    instance = VoctoImport(schedule)

    def upsert_event(event):
        if (len(args.room) == 0 or event['room'] in args.room) and event['do_not_record'] is not True:
            instance.upsert_event(event)

    try:
        schedule.foreach_event(upsert_event)
    except KeyboardInterrupt:
        pass


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--url', action='store', help='url to schedule.json')
    parser.add_argument('--acronym', '-a', help='the conference acronym on pretalx.c3voc.de')
    parser.add_argument('--conference', '-c', help='the conference slug on import.c3voc.de')
    # default=[] so len(args.room) works when no -r option is given
    parser.add_argument('--room', '-r', action='append', default=[], help='filter rooms (multiple possible)')

    args = parser.parse_args()

    print(args)

    run(args)
    print('\nimport done')
57 voc/webcal.py Normal file
@@ -0,0 +1,57 @@
import re
import ics
import requests

from voc import GenericConference
from voc.event import Event, EventSourceInterface
from voc.schedule import Schedule, ScheduleException
from voc.tools import format_duration, gen_person_uuid


class WebcalConference(GenericConference, EventSourceInterface):
    def __init__(self, **args):
        GenericConference.__init__(self, **args)

    def schedule(self, template: Schedule):
        if not self.schedule_url or self.schedule_url == 'TBD':
            raise ScheduleException(' has no schedule url yet – ignoring')

        url = re.sub(r'^webcal', 'http', self.schedule_url)
        data = requests.get(url, timeout=10).text
        cal = ics.Calendar(data)

        schedule = template.copy(self['name']) or Schedule(conference=self)

        for e in cal.events:
            event = Event(convert_to_dict(e, self), origin=self)
            schedule.add_event(event)

        return schedule


def convert_to_dict(e: ics.Event, context: WebcalConference) -> dict:
    title, subtitle, event_type = re.match(r"^(.+?)(?:( ?[:–] .+?))?(?: \((.+?)\))?$", e.name).groups()
    track, = list(e.categories) or [None]
    return {
        "guid": e.uid,
        "title": title,
        "subtitle": subtitle,
        "abstract": e.description,
        "description": '',  # empty description for the pretalx importer (temporary workaround)
        "date": e.begin.isoformat(),
        "start": e.begin.format("HH:mm"),
        "duration": format_duration(e.duration),
        "room": e.location or context['name'],
        "persons": [{
            "name": p.common_name,
            "guid": gen_person_uuid(p.email.replace('mailto:', '')),
            # TODO: add p.role?
        } for p in e.attendees],
        "track": track,
        "type": event_type or 'Other',
        "url": e.url or None,
    }
if __name__ == '__main__':
|
||||
WebcalConference()
|
||||
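To see what the title regex in `convert_to_dict` extracts, here is a standalone check with a made-up summary line (the pattern itself is copied verbatim from the file above):

```python
import re

# title, then an optional ": subtitle" (or "– subtitle"), then an optional "(type)" suffix
pattern = r"^(.+?)(?:( ?[:–] .+?))?(?: \((.+?)\))?$"

title, subtitle, event_type = re.match(pattern, "Intro: Soldering basics (Workshop)").groups()
print(title, subtitle, event_type)  # Intro : Soldering basics Workshop
```

Note that the second group keeps the separator, so the subtitle comes back as `': Soldering basics'`; a summary without separator or suffix yields `(title, None, None)`.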
119
voc/webcal2.py
Normal file
@@ -0,0 +1,119 @@
import re

import icalendar
import requests

from voc import GenericConference
from voc.event import Event, EventSourceInterface
from voc.schedule import Schedule, ScheduleException
from voc.tools import format_duration, gen_person_uuid, gen_uuid


class WebcalConference2(GenericConference, EventSourceInterface):
    def __init__(self, **args):
        GenericConference.__init__(self, **args)

    def schedule(self, template: Schedule):
        if not self.schedule_url or self.schedule_url == 'TBD':
            raise ScheduleException(' has no schedule url yet – ignoring')

        url = re.sub(r'^webcal', 'http', self.schedule_url)
        r = requests.get(url, timeout=10)
        if r.status_code != 200:
            raise ScheduleException(f' Failed to retrieve iCal feed: Error ({r.status_code})')
        cal = icalendar.Calendar.from_ical(r.text)

        schedule = template.copy(self['name']) or Schedule(conference=self)

        for e in cal.walk('vevent'):
            try:
                event = Event(convert_to_dict(e, self), origin=self)
                schedule.add_event(event)
            except Exception as err:  # do not shadow the loop variable `e`
                print(err)

        return schedule


def convert_to_dict(e: icalendar.Event, context: WebcalConference2) -> dict:
    # title, subtitle, event_type = re.match(r"^(.+?)(?:( ?[:–] .+?))?(?: \((.+?)\))?$", e.name).groups()
    track, = [str(c) for c in e.get('categories').cats] or [None]
    begin = e['dtstart'].dt
    end = e['dtend'].dt
    duration = end - begin

    return {k: (v if isinstance(v, list) or v is None else str(v)) for k, v in {
        "guid": gen_uuid(e['uid']),
        "id": e['event-id'],
        "title": e.get('summary'),
        "subtitle": '',
        "abstract": e['description'],
        "description": '',  # empty description for pretalx importer (temporary workaround)
        "date": begin.isoformat(),
        "start": begin.strftime("%H:%M"),
        "duration": format_duration(duration),
        "room": track,  # context['name']
        "persons": [{
            **p,
            "id": 0
        } for p in extract_persons(e)],
        "track": track,
        "language": 'de',
        "type": 'Session',
        "url": e.get('url', None),
    }.items()}
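The dict comprehension wrapping the returned event forces every value to `str`, except lists and `None`. A quick standalone illustration with made-up sample data:

```python
# Coerce every value to str, but keep lists (persons) and None (missing url)
# untouched — the same comprehension pattern used around the event dict above.
raw = {"id": 42, "title": "Talk", "persons": [{"name": "Alice"}], "url": None}
clean = {k: (v if isinstance(v, list) or v is None else str(v)) for k, v in raw.items()}
print(clean)  # {'id': '42', 'title': 'Talk', 'persons': [{'name': 'Alice'}], 'url': None}
```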
def extract_persons(e: icalendar.Event) -> list:
    person_str = str(e.get('location', '')).replace(' und ', '; ').strip()
    print(person_str)
    # persons = re.split(r'\s*[,;/]\s*', person_str)
    # split on ',', ';' or '/' unless the separator sits inside parentheses
    persons = re.split(r'[,;/](?![^()]*\))', person_str)

    if len(persons) == 0:
        return []
    pattern = r'([^()]+)(?:\((\w{2,3}\s+)?([^)]*)\))'

    result = []
    for p in persons:
        # p is either "name (org)", "name (org role)" or "name (name@org.tld)"
        match = re.match(pattern, p)
        if match:
            name, org, role = match.groups()
            if role and '@' in role:
                # derive the org from the mail domain (dropping a trailing .de)
                match = re.search(r'@(.+?)(\.de)?$', role)
                org = match.group(1)
                result.append({
                    "name": name.strip(),
                    "org": org.strip(),
                    "email": role.strip(),
                    "guid": gen_person_uuid(role)
                })
            else:
                if not org:
                    if len(role) <= 3:
                        org = role
                        role = None
                    else:
                        # try to catch `Distribution Cordinator, ZER` and split off the org
                        m = re.match(r'^(.+?), (\w{2,3})$', role)
                        if m:
                            org = m.group(2)
                            role = m.group(1)

                if name:
                    result.append({
                        "name": name.strip(),
                        "org": org.strip() if org else None,
                        "role": role.strip() if role else None,
                    })
        elif p:
            result.append({
                "name": p.strip(),
            })

    print(result)
    print()
    return result


if __name__ == '__main__':
    WebcalConference2()
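The split regex in `extract_persons` uses a negative lookahead so separators inside parentheses (e.g. inside a `(role, org)` annotation) do not break a person entry apart. A standalone check with a made-up location string:

```python
import re

# Split on ',', ';' or '/' only when the separator is NOT inside parentheses:
# the lookahead (?![^()]*\)) rejects a split point that can still reach a ')'
# without first crossing a '('.
person_str = "Alice Example (ZER), Bob Muster (Distribution Coordinator, ZER)"
persons = re.split(r'[,;/](?![^()]*\))', person_str)
print(persons)  # ['Alice Example (ZER)', ' Bob Muster (Distribution Coordinator, ZER)']
```

The comma after `(ZER)` splits as usual, while the comma inside the second pair of parentheses is kept, so the role annotation survives intact for the later `name (org role)` match.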