Changelog
ForgeRock continuously provides updates to Autonomous Identity to introduce new features, fix known bugs, and address security issues.
Key fixes
- 2022.11.8
  This release contains a collection of security and bug fixes.

- 2022.11.7
  This release contains a collection of security and bug fixes. Additionally, Autonomous Identity requires Opensearch 1.3.13 in this release.

- 2022.11.6
  This release contains the latest container images.

- 2022.11.5
  This release contains a collection of security and bug fixes.

- 2022.11.4
  This release contains a collection of security and bug fixes.

- 2022.11.3
  The following bugs were fixed in this release, along with other security fixes:
  - AUTOID-3174: Need an assignments API
  - AUTOID-3362: Allow customer to change timeout for API container when running Opensearch query

- 2022.11.2
  The following bugs were fixed in this release:
  - AUTOID-3329: Misspelled http header for kibana conf
  - AUTOID-3331: Elasticsearch keystore and truststore password

- 2022.11.1
  This release contains a collection of important security fixes.

- 2022.11.0
  The following bugs were fixed in this release, along with other security fixes:
  - AUTOID-2766: Analytics results show inconsistent results
  - AUTOID-2864: Not able to delete data sources in AutoID
  - AUTOID-2894: Support for updating all certificates in AutoID
  - AUTOID-3130: Upgrade Spark to 3.3
  - AUTOID-3135: Upgrade Open Distro to Opensearch
  - AUTOID-3145: Upgrade Python to 3.8
  - AUTOID-3160: Upgrade OpenJDK to 11

Known Issues

- 2022.11.8
  There are no known issues in this release.

- 2022.11.5
  There are no known issues in this release.

- 2022.11.4
  There are no known issues in this release.

- 2022.11.3

  Discovered regression

  Autonomous Identity 2022.11.3 was originally released on 04/11/2023.
  We discovered a regression where Apache Livy has log4j1 binaries included with the deployer. If you installed 2022.11.3 before 04/13/2023, run the steps below to upgrade log4j1 to log4j2.
  If you installed 2022.11.3 on or after 04/13/2023, the binaries are already updated, and you do not need to upgrade the log4j1 binaries.

  Update log4j1 to log4j2:

  - Stop the Apache Livy server:

        ~/livy/bin/livy-server stop

  - Back up your old log4j and related jar files:

        cd ~/livy/jars
        mv log4j-1.2.16.jar ~/log4j-1.2.16.jar.bkp
        mv slf4j-log4j12-1.6.1.jar ~/slf4j-log4j12-1.6.1.jar.bkp
        mv slf4j-reload4j-1.7.36.jar ~/slf4j-reload4j-1.7.36.jar.bkp
        mv slf4j-api-1.7.25.jar ~/slf4j-api-1.7.25.jar.bkp

  - Replace them with the log4j2 jar and its bridge jars:

        cd ~/livy/jars
        wget https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-1.2-api/2.18.0/log4j-1.2-api-2.18.0.jar
        wget https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-core/2.18.0/log4j-core-2.18.0.jar
        wget https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-slf4j-impl/2.18.0/log4j-slf4j-impl-2.18.0.jar
        wget https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-api/2.18.0/log4j-api-2.18.0.jar
        wget https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.36/slf4j-api-1.7.36.jar

  - Under the conf folder, create a log4j2.properties file:

        cd ~/livy/conf
        vi log4j2.properties

  - In your log4j2.properties file, adjust the log level and related configuration to suit your requirements:

        status = info
        name = RollingFileLogConfigDemo

        # Log files location
        property.basePath = ./logs

        # RollingFileAppender name, pattern, path and rollover policy
        appender.rolling.type = RollingFile
        appender.rolling.name = fileLogger
        appender.rolling.fileName = ${basePath}/autoid.log
        appender.rolling.filePattern = ${basePath}/autoid_%d{yyyyMMdd}.log.gz
        appender.rolling.layout.type = PatternLayout
        appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %level [%t] [%l] - %msg%n
        appender.rolling.policies.type = Policies

        # RollingFileAppender rotation policy
        appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
        appender.rolling.policies.size.size = 10MB
        appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
        appender.rolling.policies.time.interval = 1
        appender.rolling.policies.time.modulate = true
        appender.rolling.strategy.type = DefaultRolloverStrategy
        appender.rolling.strategy.delete.type = Delete
        appender.rolling.strategy.delete.basePath = ${basePath}
        appender.rolling.strategy.delete.maxDepth = 10
        appender.rolling.strategy.delete.ifLastModified.type = IfLastModified

        # Delete all files older than 30 days
        appender.rolling.strategy.delete.ifLastModified.age = 30d

        # Configure root logger
        rootLogger.level = info
        rootLogger.appenderRef.rolling.ref = fileLogger

        log4j1.compatibility = true

  - Restart Apache Livy:

        cd ~/livy/
        ./bin/livy-server start

  - Check that Apache Livy is up and running; a verification sketch follows these steps. You can access the log for an analytics job. Specific Autonomous Identity logs are at ~/livy/logs/autoid.log.

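  To confirm the server restarted cleanly, you can query Livy's REST API and follow the Autonomous Identity log. This is a minimal sketch, assuming Livy listens on its default port (8998) and that the log path shown above is unchanged:

      # Livy's REST API answers on port 8998 by default; even an empty session
      # list in the JSON response confirms the server is up and responding.
      curl http://localhost:8998/sessions

      # Follow the Autonomous Identity log while an analytics job runs.
      tail -f ~/livy/logs/autoid.log
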
- 2022.11.2
  There are no known issues in this release.

- 2022.11.1
  There are no known issues in this release.

- 2022.11.0

  There is a known issue on RHEL 8/CentOS Stream 8 where the Docker swarm overlay network configuration breaks when the outside network maximum transmission unit (mtu) is smaller than the default value. The mtu is the maximum size of the packet that can be transmitted from a network interface. Refer to https://github.com/moby/libnetwork/issues/2661 and https://github.com/moby/moby/pull/43197.

  When deploying a multinode configuration on RHEL 8/CentOS Stream 8, run the following steps:

  - Check the mtu for docker0 and eth0:

        ifconfig | grep mtu

  - Set the docker0 mtu value to be equal to the eth0 mtu value:

        sudo ifconfig eth0 mtu 1500

    Make sure to run the command on all nodes, and again after each virtual machine reboot. A sketch of this adjustment follows these steps.
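
  The snippet below is one way to sketch the adjustment described above: it reads the current eth0 mtu and applies the same value to the docker0 bridge so the two interfaces agree. It assumes the interfaces on your nodes are named eth0 and docker0 and that the ip tool is available:

      # Read the current mtu of the outside-facing interface (eth0) from sysfs.
      ETH0_MTU=$(cat /sys/class/net/eth0/mtu)

      # Apply the same value to the docker0 bridge so the two interfaces agree.
      sudo ip link set dev docker0 mtu "$ETH0_MTU"

      # Confirm that both interfaces now report the same mtu.
      ifconfig | grep mtu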